Memory
She/Her
Site Admin, Skilled player (1556)
Joined: 3/20/2014
Posts: 1765
Location: Dumpster
DrD2k9 wrote:
I mostly understand what the ratings are used for regarding the site. But if so few are concerned with the ratings to begin with, should we really be using them to determine these things on the site?
This is part of why I want to know why people don't rate TASes. If people's problems with the rating system stem from its execution, we could potentially fix that. If the problem is the whole concept of rating to begin with, then I don't think we should rely on it for these systems.
Disclaimer: I don't have any other suggestions for tier changes, but I don't like the idea of ratings affecting player points. I personally feel player points (as a quantitative value) should be based more on the quantity of currently published content a person has produced than on how others qualitatively perceive that content; I have no problem with losing points due to obsoletion.
I dislike this idea because one could then farm player points more easily by barely clearing the bar with one's submissions, as opposed to having quality impact them. If we are to have such a system, I feel it should not rely solely on quantity. The "number of published movies" statistic already exists; there is no need to duplicate it.
[16:36:31] <Mothrayas> I have to say this argument about robot drug usage is a lot more fun than whatever else we have been doing in the past two+ hours
[16:08:10] <BenLubar> a TAS is just the limit of a segmented speedrun as the segment length approaches zero
CoolHandMike
He/Him
Editor, Judge, Experienced player (895)
Joined: 3/9/2019
Posts: 695
In my case I only try to vote on games that I understand really well, or that are obvious in one direction or another. Also, the criteria are so ambiguous. Technical could mean many things. What about runs that are extremely time-consuming but boil down to a brute-force approach to finding really fast sections? Such a run may use only a single trick yet take dozens of hours. But should another run, one that uses some assembly information the author took only a couple of minutes to acquire, get a higher rating? Entertainment is also poorly explained. Look no further than Masterjun's Mario 3 run: the video itself contains next to no gameplay, but people rated it wildly differently for different reasons. Although I do think this is the better of the two criteria. Also, I am from the USA, and in our school grading system a 70 is just barely passing. So should my minimum rating be a 7.0, since I find almost all of these TASes passable? Or should it be like a normal distribution, where a typically good run sits around a 5? So yeah, it would need some changes if you want people like me to vote more.
discord: CoolHandMike#0352
Memory
She/Her
Site Admin, Skilled player (1556)
Joined: 3/20/2014
Posts: 1765
Location: Dumpster
CoolHandMike wrote:
In my case I only try to vote on games that I understand really well, or that are obvious in one direction or another. Also, the criteria are so ambiguous. Technical could mean many things. What about runs that are extremely time-consuming but boil down to a brute-force approach to finding really fast sections? Such a run may use only a single trick yet take dozens of hours. But should another run, one that uses some assembly information the author took only a couple of minutes to acquire, get a higher rating? Entertainment is also poorly explained. Look no further than Masterjun's Mario 3 run: the video itself contains next to no gameplay, but people rated it wildly differently for different reasons. Although I do think this is the better of the two criteria. Also, I am from the USA, and in our school grading system a 70 is just barely passing. So should my minimum rating be a 7.0, since I find almost all of these TASes passable? Or should it be like a normal distribution, where a typically good run sits around a 5? So yeah, it would need some changes if you want people like me to vote more.
Are you aware of the Voting Guidelines? Come to think of it, since rating a movie no longer takes you off the page you were on, I don't think they're linked anymore from where you perform ratings.
[16:36:31] <Mothrayas> I have to say this argument about robot drug usage is a lot more fun than whatever else we have been doing in the past two+ hours
[16:08:10] <BenLubar> a TAS is just the limit of a segmented speedrun as the segment length approaches zero
CoolHandMike
He/Him
Editor, Judge, Experienced player (895)
Joined: 3/9/2019
Posts: 695
I do not know if I have ever seen that page...
discord: CoolHandMike#0352
Senior Moderator
Joined: 8/4/2005
Posts: 5777
Location: Away
CoolHandMike wrote:
Technical could mean many things. What about runs that are extremely time-consuming but boil down to a brute-force approach to finding really fast sections? Such a run may use only a single trick yet take dozens of hours. But should another run, one that uses some assembly information the author took only a couple of minutes to acquire, get a higher rating?
In a nutshell, the technical rating serves to avoid mismanaged expectations in the common case where a run has a high technical rating and a low entertainment rating (the opposite has also happened, but it's quite rare). If the ratings were folded into one, these runs would most likely sit at the bottom of the list despite being well-produced; having a separate metric to recognize a potential discrepancy is often useful, especially for shorter runs where the subject of entertainment isn't as well-defined, or, conversely, very long runs that are likely to be skipped by anyone except people who have played the games in question.

For this metric to be more representative of the actual level of technical prowess, it would need stricter guidelines and rating privileges, e.g. limited to judges and people who are deeply familiar with the game. Unfortunately we don't have enough of those for every TAS to justify that, so it ends up being a mixed bag of opinions subject to massive cross-pollination with the entertainment rating.

As it stands, different people have different scales and rating methods that may or may not be based on the guidelines Memory linked. Some base their rating on the impression of effort put into research and optimization, which often depends mainly on how verbose the submission message is or how closely they've followed the production progress. Some don't give it much thought and put in either a safely high value or a value they wouldn't be able to rationally explain. Some base their rating on the perceived likelihood and/or magnitude of potential improvement, with the obvious pitfall of guesstimating. Various combinations are also possible.

Honestly, I don't think there is a method that wouldn't miss the mark completely every now and then, and there is no consensus about which one is better because all of them are very crude approximations. What constitutes the average and the baseline also differs between people; e.g. some believe that 5.0 is the acceptable minimum for a published TAS (because we aren't supposed to publish bad ones!) while others use the entire scale regardless.

Since we neither can nor want to enforce a single standard for this, you're free to come up with a system that you consider sensible and easy enough to remember and have a feel for when rating runs down the line. As long as you can explain to yourself why you've given a certain rating to a certain run, it's probably good enough.
CoolHandMike wrote:
Entertainment is also poorly explained. Look no further than Masterjun's Mario 3 run: the video itself contains next to no gameplay, but people rated it wildly differently for different reasons.
Entertainment is subjective on so many levels that it's actually impossible to even make a guideline that everyone would agree with. Vote as you see fit.
Warp wrote:
Edit: I think I understand now: It's my avatar, isn't it? It makes me look angry.
Aran_Jaeger
He/Him
Banned User
Joined: 10/29/2014
Posts: 176
Location: Bavaria, Germany
A while back, Nymx brought this topic up with me and we discussed it for a while, since I had already put thought into it to build an opinion. To my surprise, he then quickly made a thread for it, http://tasvideos.org/forum/viewtopic.php?t=21070 , where he mentioned our discussion (though he could have searched for existing threads on the topic). I had quite a bit to say from my side, but I wasn't prepared to make a write-up at the time and didn't feel like doing one spontaneously, and I prefer making posts in a detailed and complete manner (to reduce unnecessary double posts and avoid misunderstandings), so I'll make use of this opportunity now.

There are guidelines for rating on TASVideos ( http://tasvideos.org/VotingGuidelines.html ), but they lack further explanation or specification of which values correspond to which conditions being met or not met. Ideally one would want large sets of TASes that are easily comparable through their technical and entertainment ratings, but several problems stand in the way of that. There is no uniform, consistent, generally used scale: no consensus mapping from abstract or concrete conditions (effort, lack of testing, improvability or known improvements, the degree of uncertainty in the author's knowledge of the game) to values from 0 to 10. As an example, there is PJBoy's personal scale ( http://tasvideos.org/PJBoy.html ), and I'd expect it not to be uniformly compatible with the scales others use.

If every user rated every movie, and every user kept the same consistent personal scale over time, then (speaking from the perspective of an idealized mathematical formulation) a unique "averaged out" scale would result and apply to all movies, so there would be no problem comparing TASes within a consistent context: the discrepancies between different people using different scales would cancel out. [This would only hold as long as the scale limits of 0 and 10 never clipped the value a rater wanted to assign. Someone whose personal scale maps a wider, more nuanced range of movie qualities onto the 0-to-10 spectrum can express distinctions near the extremes that someone whose scale already caps out at 10 or 0 cannot; what is a 10 for one person might correspond to, say, an 8 for another, and would have to be a 13 or 11 on the stretched-out scale to mean the same thing.]

A metaphor for what I'm trying to address: people feel the same temperature but report it on different scales, say Fahrenheit or Celsius, and assign the corresponding values as their ratings while leaving out the (potentially crucial) indicator, or an explanation on some personal page, of which scale they used. Given just a bunch of temperature numbers, without knowing whether they are Fahrenheit or Celsius, one cannot compare the actual felt temperatures. A consequence is that it's effectively not only the rating values themselves that matter for a movie but (probably much more so) the set of people who did or didn't rate it: if a bunch of "Fahrenheit users" rate a given TAS, the resulting value can differ greatly from what a bunch of "Celsius users" would have produced, even though both groups mean roughly the same "temperature" (technical quality or entertainment), and that can make a TAS look much better or worse depending on the situation.

There are generalizations of the "every user rates every movie" situation that would still (with respect to keeping movie ratings comparable) cancel out the imbalances caused by different groups of people rating some movies but not others in a chaotically mixed distribution of rater sets:

1. It would suffice for the same fixed (perhaps deliberately restricted) set of users to do all of the rating; this still yields one consistent resulting scale, provided those raters keep rating without changing their scales. But one would then lose the opinions, ratings, and scale contributions of everyone outside that set, so a movie's resulting rating would represent a restricted set of raters rather than the general audience's view.

2. The set of all rated movies could be split into (preferably rather few) subsets such that, within each subset, every rater of any movie in it has also rated every other movie in it. Each subset would then have one resulting scale, so movies could at least be compared within the same subset, while two movies from different subsets may be incomparable (see the sketch after this list).

3. This could be extended further to multiple cases of "set X of users rates set Y of movies", with the X's disjoint across different Y's; but the larger their count, the harder it becomes to keep track of the individual averaged-out scales used for the different movie sets, so this is a weakened form, and with more and more such correspondences it expectedly gets chaotic, i.e. similar to the current situation.
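To make idea 2 concrete, here is a minimal sketch under a deliberately conservative reading: movies whose rater sets are exactly identical form a group, so within each group every rater has rated every movie. The data layout here is a hypothetical assumption, not anything the site actually stores this way.

[code]
# Hypothetical sketch of grouping idea 2: movies with identical rater
# sets are directly comparable (every rater rated every movie in the group).
from collections import defaultdict

def comparable_groups(raters_by_movie):
    """Group movies by their exact rater set; ratings are comparable
    within a group, but not necessarily across groups."""
    groups = defaultdict(list)
    for movie, raters in raters_by_movie.items():
        groups[frozenset(raters)].append(movie)
    return list(groups.values())

# Example: m1 and m2 share the same raters, m3 does not.
print(comparable_groups({
    "m1": {"A", "B"},
    "m2": {"A", "B"},
    "m3": {"A", "C"},
}))  # -> [['m1', 'm2'], ['m3']]
[/code]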
Technically, the issue of "imbalances caused by different groups of people rating different groups of movies in chaotically mixed ways" might no longer be a big one if, for every rating on every movie, one could see which user gave it and what their (fixed) scale is (or, if not fixed, when they rated it and what their scale was at that point). But then, for a guest or user who wants to compare movie ratings or inform themselves in that regard, the task expectedly becomes excessive.

I suppose the goal of a scale explanation is to produce the same rating values independently of which user applies the scale, whenever the same perception of a movie's (technical) quality is present in different people, similar to a clearly defined function with inputs and outputs. Another idea would be to set up one or several test movies (perhaps ones that in some reasonable sense cover the range of qualitatively different types of TASes) together with a scale and an explanation of what (technical) rating a movie should get under some guidelines; then make it mandatory (or optional) for new raters to rate these test movies, and use how their ratings of the test movies differ from the average to calibrate how a rating they assign to a normal movie would translate into another rater's value, had that person rated it with the same perception of quality. (I.e. to know or estimate that a rating of 5.6 by person X would translate into a rating of 5.6 + 1.3 if it were provided by person Y; a small sketch of this calibration follows below.) But this assumes there exist some uncontroversial movies with a "true" (technical) rating on which a consensus could be formed, to be used as the reference for translating values between raters (for movies of a similar type).

Many movies lack ratings, and overall not many users rate movies. So I think the larger the difference in the number of raters between any two compared movies, and the smaller the set of people who rated both (relative to the total number of raters across the two movies), the larger the difference in assigned values that is caused solely by different personal rating scales can be expected to be. And if the values assigned by groups of raters deviate far enough from each other (in the "Fahrenheit versus Celsius" sense), then, provided one knew the candidate scales in major use, one might even be able to read off, with some degree of certainty, which scale a given person used just from all their assigned values; that requires a few prominent scales being in use rather than everyone's being too individualized.
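Here is a minimal sketch of that calibration idea, assuming additive per-rater offsets estimated from shared test movies. The data layout and names are hypothetical, and the 0-to-10 clamping issue discussed above is ignored.

[code]
# Hypothetical sketch of test-movie calibration via additive offsets.
from statistics import mean

def rater_offsets(ratings, test_movies):
    """Each rater's mean deviation from the consensus on the test movies."""
    consensus = {m: mean(r[m] for r in ratings.values() if m in r)
                 for m in test_movies}
    return {rater: mean([scores[m] - consensus[m]
                         for m in test_movies if m in scores] or [0.0])
            for rater, scores in ratings.items()}

def translate(score, from_rater, to_rater, offsets):
    """Re-express from_rater's score on to_rater's personal scale,
    e.g. a 5.6 from X becomes a 5.6 + 1.3 from Y."""
    return score - offsets[from_rater] + offsets[to_rater]

# Example: X rates the test movies t1, t2 below consensus, Y above,
# so X's 5.6 for movie m42 corresponds to a 6.9 on Y's scale.
ratings = {"X": {"t1": 4.0, "t2": 6.0, "m42": 5.6},
           "Y": {"t1": 5.5, "t2": 7.1}}
offsets = rater_offsets(ratings, ["t1", "t2"])
print(round(translate(5.6, "X", "Y", offsets), 2))  # -> 6.9
[/code]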
Generally speaking, I've been aware of these kinds of issues since I joined TASVideos a few years ago, and they're part of why I'm reluctant to rate movies (especially on the entertainment side). An alternative suggestion: offer different ways of rating movies, e.g. a list of adjectives, preferably with as little overlap in meaning between them as possible so that each keeps its own separate meaning, such as "glitchy, funny, exotic, speedy, lucky" (I took the movie award categories as reference, but other qualities could be chosen as well), with each aspect rated in a small number of steps between the opposite poles of yes and no. I'm not sure which existing threads on the topic might be helpful if looked into, but one in particular is this one on technical ratings and player points: http://tasvideos.org/forum/viewtopic.php?t=20280 . Generally, I'd think technical ratings are easier to give than entertainment ratings, but that depends on what one wants the technical rating to express: the rerecord count? the amount of time worked on a movie? should ratings be assigned independently of improvements that are expected or known, either at the time or later? So for a technical rating to make sense, TASVideos might need to specify more precisely what it should mean: which aspects of technical quality to include, possibly with example reference values assigned to some fixed existing movies for calibration purposes, and maybe even which conceivable technical aspects are excluded, since users might otherwise assume those fit in there too. Hopefully some of these critiques, suggestions, and ideas can help with revamping and improving the rating system.
collect, analyse, categorise. "Mathematics - When tool-assisted skills are just not enough" ;) Don't want to be taking up so much space adding to posts, but might be worth mentioning and letting others know for what games 1) already some TAS work has been done (ordered in decreasing amount, relative to a game completion) by me and 2) I am (in decreasing order) planning/considering to TAS them. Those would majorly be SNES games (if not, it will be indicated in the list) I'm focusing on. 1) Spanky's Quest; On the Ball/Cameltry; Musya; Super R-Type; Plok; Sutte Hakkun; The Wizard of Oz; Battletoads Doubledragon; Super Ghouls'n Ghosts; Firepower 2000; Brain Lord; Warios Woods; Super Turrican; The Humans. 2) Secret Command (SEGA); Star Force (NES); Hyperzone; Aladdin; R-Type 3; Power Blade 2 (NES); Super Turrican 2; First Samurai. (last updated: 18.03.2018)
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
CoolHandMike wrote:
Technical could mean many things. What about runs that are extremely time-consuming but boil down to a brute-force approach to finding really fast sections? Such a run may use only a single trick yet take dozens of hours. But should another run, one that uses some assembly information the author took only a couple of minutes to acquire, get a higher rating? Entertainment is also poorly explained. Look no further than Masterjun's Mario 3 run: the video itself contains next to no gameplay, but people rated it wildly differently for different reasons. Although I do think this is the better of the two criteria.
Since 99.9% of people misunderstand what "technical rating" means, it might be a good idea to invent a new name for it. Those 99.9% seem to think it's a synonym for "frame perfection", and therefore that every single game could theoretically get a 10 for technical rating. No amount of explaining that this isn't what it's supposed to mean seems to help; I think the only thing that would help is changing the name of the rating category.

Don't think of it as a techniCAL rating, but as a techniQUE rating. A subtle but big difference at the same time. It's quite a subjective "coolness" rating: how many impressive speedrunning techniques are being used, both visually on screen and in the background work that was necessary to make the run. Don't think of it as clocking someone's time in a 100-meter sprint; think of it as judging the techniques used in a skateboarding half-pipe competition.

Not all games lend themselves to a perfect technical rating, in the exact same way that they may not lend themselves to a perfect entertainment rating. Maybe they are too straightforward and there just isn't much room to show off any cool TASing techniques and knowledge. And it's completely OK to be subjective about it: if you personally think a run wasn't technically very impressive, that's fine, just like with the entertainment rating. Which, by the way, IMO should be about how enjoyable the run was to watch independently of the technique aspect. Again, highly subjective and personal, but that's just fine.
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Mothrayas wrote:
This question has been asked a lot of times, and usually it comes down to the rating form being too hidden away, requiring too many clicks to use, and just being over-complicated in general.
When that part of the backend code was developed, Ajax techniques already existed to a large extent, but they were still rather new, and the implementation unfortunately uses very old-fashioned web forms from the 90's. A more convenient implementation would be a rating scale (visible and editable if you are logged in) directly on the movie's description box, which you could simply click at any time to change; it would update the rating in real time without having to explicitly submit anything. A scale from 0 to 10, with no decimals, would probably be just fine. (In retrospect it might have been a mistake to add the decimal part.) But somebody would need to implement that...
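For illustration, here is a minimal sketch of the kind of backend endpoint such a one-click widget could talk to. This is only an assumption-laden mock-up, not the site's actual code: the Flask framework, the route, the X-User header, and the in-memory storage are all hypothetical stand-ins.

[code]
# Hypothetical sketch: a single-request rating endpoint (not TASVideos code).
from flask import Flask, abort, jsonify, request

app = Flask(__name__)
ratings = {}  # (movie_id, user_id) -> score; stand-in for a real database

@app.post("/movies/<int:movie_id>/rate")
def rate(movie_id):
    user = request.headers.get("X-User")  # stand-in for real authentication
    if user is None:
        abort(401)
    score = request.get_json(force=True).get("score")
    # Whole numbers from 0 to 10, no decimals, as suggested above.
    if not isinstance(score, int) or not 0 <= score <= 10:
        abort(400)
    ratings[(movie_id, user)] = score
    scores = [s for (m, _), s in ratings.items() if m == movie_id]
    # Return the new average so the widget can update in place, no reload.
    return jsonify(average=sum(scores) / len(scores))
[/code]

The widget itself would then be ten clickable notches that each fire one background request and swap the returned average into the page, which gets the interaction down to a single click.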
Skilled player (1022)
Joined: 1/9/2011
Posts: 231
It seems like there's a whole lot of discussion on telling people HOW to rate movies and not on how to get people TO rate movies. You can apply all the metrics you want to the system, but they won't be effective if there are still only three people (if that) rating any given movie.

One way to get a few more people to rate movies is to include a special note after they vote on a submission, something to the effect of: "Please remember to rate this submission after publication." It's just a small push to remind people that ratings are still important to the site. I realize that the number of people who vote on a submission is still small, but if it increases the number of ratings per movie by even one, that would be significant at this point.

Another possibility I'd like to throw in: since one of the problems is actually getting people to come back after publication, I would like to see the rating system available to individuals as soon as they place a vote. BUT (inb4 this gets brought up) it needs to be absolutely clear that no one (not even the judges) can view these ratings until after the run has been published. I recognize there's still a risk of ratings becoming inflated with this method, but I'd still like to hear thoughts about it.
Radiant
Player (26)
Joined: 8/29/2011
Posts: 1206
Location: Amsterdam
Memory wrote:
This is part of why I want to know why people don't rate TASes. If people's problems with the rating system stem from its execution, we could potentially fix that. If the problem is the whole concept of rating to begin with, then I don't think we should rely on it for these systems.
People love ratings. People will rate anything. The whole internet runs on ratings. Now here's how I rate something on my local food delivery site:
  • Click on one of five stars
And here's how I rate something on TASvideos:
  • Log in to the video part of the site, which is different from logging into the forums.
  • Click on 'rate this movie'
  • Figure out the difference between "entertainment quality" and "technical quality"
  • Click on the first pulldown
  • Click on a number
  • Click on the second pulldown
  • Click on a number
  • Click on the third pulldown
  • Click on a number
  • Click on the fourth pulldown
  • Click on a number
  • Click on send data
  • Close the extra tab that popped up with my results
...see a difference here? THIS is what's keeping people from rating. It's too many clicks, there's no reason to have fractional ratings, and people have to make up their own distinction between entertainment and technical.
DrD2k9
He/Him
Editor, Judge, Expert player (2213)
Joined: 8/21/2016
Posts: 1090
Location: US
To add to Radiant's list of how to rate a movie: the voting/rating guidelines tell viewers to consider the technical production of the run, not just how technical it appears on-screen. There are therefore additional steps required for an appropriate technical rating:
  • Click to open a new web page with the author's comments/submission notes to see how they made the TAS in the first place.
  • Hope that those comments are fully developed and not simply a reference to an earlier version's submission notes (which would require opening yet another web page to read those notes, or even more if the run has been updated multiple times).
  • Read said notes to hopefully understand what the author actually did to make the TAS from a technical standpoint.
  • Have enough knowledge of the game to know whether what the author did really was technically impressive. (While this may not truly be a requirement, many viewers will feel this way in regards to rating technical quality.)
  • Return to the original movie rating spot to actually do the rating itself.
Memory
She/Her
Site Admin, Skilled player (1556)
Joined: 3/20/2014
Posts: 1765
Location: Dumpster
Honestly, I really dislike the very concept of a technical rating to begin with. It either relies on the quality and thoroughness of the submission text (if it's to include techniques that aren't visible just from watching), or it simply reflects how many techniques were visible while watching, which will probably show up in the entertainment score anyway, and that doesn't seem very useful to me.

Pretty much all other forms of media have one rating metric: entertainment. When you rate a movie on, say, IMDb, you aren't offered a second rating for cinematography. It's just not needed.

I understand that people want movies to be appreciated for their technical accomplishments, but I don't feel a technical rating is the right way to go about that. Only those who understand the ins and outs of a game can truly appreciate the amount of effort that goes into making the TAS; what actually gets appreciated is the documentation of the process rather than the achievement itself.
[16:36:31] <Mothrayas> I have to say this argument about robot drug usage is a lot more fun than whatever else we have been doing in the past two+ hours
[16:08:10] <BenLubar> a TAS is just the limit of a segmented speedrun as the segment length approaches zero
CoolHandMike
He/Him
Editor, Judge, Experienced player (895)
Joined: 3/9/2019
Posts: 695
Is there any way to entice people from YouTube over to rate the movie on the site? Or to make use of the YouTube thumbs-up/thumbs-down ratings?
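For reference, like counts are retrievable programmatically. Here is a small sketch using the YouTube Data API v3; an API key is required, and note that the public API no longer exposes dislike counts.

[code]
# Sketch: fetch a video's public like count via the YouTube Data API v3.
import requests

API_KEY = "YOUR_KEY"  # hypothetical; requires a Google API key

def youtube_likes(video_id):
    """Return the public like count for a video, or None if not found.
    (Dislike counts are no longer publicly exposed by the API.)"""
    resp = requests.get(
        "https://www.googleapis.com/youtube/v3/videos",
        params={"part": "statistics", "id": video_id, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json().get("items", [])
    return int(items[0]["statistics"]["likeCount"]) if items else None
[/code]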
discord: CoolHandMike#0352
Radiant
Player (26)
Joined: 8/29/2011
Posts: 1206
Location: Amsterdam
Memory wrote:
Honestly, I really dislike the very concept of a technical rating to begin with.
Yes, and you'll probably find a near-linear correlation between technical and entertainment ratings anyway, because that's how people tend to vote when two axes are required (particularly if people don't fully understand the axes); this graph is a good example. Anyway, I get the impression that most people agree the technical rating should be deprecated, but the site currently lacks a technical (heh) person to implement that change. So there's not much point in talking about it until the mods and/or admins publicly recruit a PHP coder to alter the forum. That shouldn't be hard to find, mind you...
feos
Site Admin, Skilled player (1254)
Joined: 4/17/2010
Posts: 11475
Location: Lake Char­gogg­a­gogg­man­chaugg­a­gogg­chau­bun­a­gung­a­maugg
We've had a long talk with Nach regarding tech ratings: how useless they are in my opinion, and how incredibly helpful and accurate they are in his. I think we already know all the problems that come with separating the ratings. I don't think we should completely drop one or the other, but rather combine everything into one scale. When you rate a movie on IMDB, you just take everything into account and express how much you liked it, that's all. So some people may like the technicality of a sub-one-second ACE movie, others may like a full run; same with optimality or anything else, really. I see no problems with generalizing it.
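If the two existing axes were folded into one for already-rated movies, the migration could be a simple weighted average. A sketch, assuming a 2:1 entertainment-to-technical weighting (an assumption on my part; if the real weighting differs, only the constants change):

[code]
# Hypothetical migration sketch: fold two 0-10 axes into one score.
# The 2:1 weighting below is an assumption, not confirmed site behavior.
WEIGHTS = {"entertainment": 2.0, "technical": 1.0}

def combined(entertainment: float, technical: float) -> float:
    """Weighted average of the two old axes on the same 0-10 scale."""
    total = (WEIGHTS["entertainment"] * entertainment
             + WEIGHTS["technical"] * technical)
    return total / sum(WEIGHTS.values())

print(combined(8.0, 5.0))  # -> 7.0
[/code]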
Warning: When making decisions, I try to collect as much data as possible before actually deciding. I try to abstract away and see the principles behind real world events and people's opinions. I try to generalize them and turn into something clear and reusable. I hate depending on unpredictable and having to make lottery guesses. Any problem can be solved by systems thinking and acting.
Memory
She/Her
Site Admin, Skilled player (1556)
Joined: 3/20/2014
Posts: 1765
Location: Dumpster
feos wrote:
We've had a long talk with Nach regarding tech ratings: how useless they are in my opinion, and how incredibly helpful and accurate they are in his. I think we already know all the problems that come with separating the ratings. I don't think we should completely drop one or the other, but rather combine everything into one scale. When you rate a movie on IMDB, you just take everything into account and express how much you liked it, that's all. So some people may like the technicality of a sub-one-second ACE movie, others may like a full run; same with optimality or anything else, really. I see no problems with generalizing it.
I totally agree with this solution.
[16:36:31] <Mothrayas> I have to say this argument about robot drug usage is a lot more fun than whatever else we have been doing in the past two+ hours
[16:08:10] <BenLubar> a TAS is just the limit of a segmented speedrun as the segment length approaches zero
DrD2k9
He/Him
Editor, Judge, Expert player (2213)
Joined: 8/21/2016
Posts: 1090
Location: US
Memory wrote:
I understand that people want movies to be appreciated for their technical accomplishments, but I don't feel a technical rating is the right way to go about that. Only those who understand the ins and outs of a game can truly appreciate the amount of effort that goes into making the TAS; what actually gets appreciated is the documentation of the process rather than the achievement itself.
This is exactly why a technical rating doesn't provide much information to a casual viewer, or even to another TASer unfamiliar with the game.

Speaking of casual (non-member) viewers of our publications: they aren't even given the option to rate movies. They can see the rating value but can't contribute to it. Thus only members can have an impact on the resulting rating; and when most members do actually contribute to how a movie is perceived, it's in the pre-publication discussion/voting. I'd speculate that a fair number of our members simply feel that going back and rating a run post-publication is an unnecessary (if not also tedious) extra step when they've already given their opinion on the run. This also suggests that most of our members don't care much about post-publication ratings to begin with, or they'd do them. Further, if a particular member has shown no interest in providing a simple yes/no/meh vote, how can we expect that individual to want to provide an even more complex assessment of a run they were never interested in to begin with?

So the question becomes: who/what are the ratings intended for? If it's to display the generalized perception of entertainment value, a simple 5-star rating system (as has already been brought up) would be sufficient, and there's little reason to restrict this assessment to members only; we could let the general public offer their entertainment perspective as well. If it's to rate the technical prowess of runs/authors, this accomplishes little more than stroking the egos of our members while providing little to no pertinent information to a casual watcher. A technical rating based on anything other than what's visible on screen is meaningless to that casual watcher. I'd suggest that even for other TASers, the technical accomplishments/ratings of movies are rarely the reason they choose to watch a particular run, compared to entertainment or general interest in the game being TASed.

We claim that the underlying purpose of this site is entertainment. People don't want to have to work to be entertained; in general, they simply want entertainment provided to them. So doing (or even understanding) a technical rating takes extra work that most people aren't going to mess with when they're simply looking to be entertained.
Player (26)
Joined: 8/29/2011
Posts: 1206
Location: Amsterdam
feos wrote:
I don't think we should completely drop one or the other, but rather combine everything into one scale.
And I completely agree. When can we start?
Warp
Banned User
Joined: 3/10/2004
Posts: 7698
Location: Finland
Memory wrote:
Honestly, I really dislike the very concept of a technical rating to begin with. It either relies on the quality and thoroughness of the submission text (if it's to include techniques that aren't visible just from watching), or it simply reflects how many techniques were visible while watching, which will probably show up in the entertainment score anyway, and that doesn't seem very useful to me. Pretty much all other forms of media have one rating metric: entertainment. When you rate a movie on, say, IMDb, you aren't offered a second rating for cinematography. It's just not needed.
It's more like the traditional figure skating judging system, which rates a performance based on three categories: technical merit, required elements, and presentation. In this case it's just technical merit and entertainment.
Memory
She/Her
Site Admin, Skilled player (1556)
Joined: 3/20/2014
Posts: 1765
Location: Dumpster
Warp wrote:
It's more like the traditional figure skating judging system, which rates a performance based on three categories: technical merit, required elements, and presentation. In this case it's just technical merit and entertainment.
Those figure skating ratings were delivered by judges, not by a general audience. TASing is also more complex than skating on the technical side. I don't see what merit that type of system offers over a simpler one for our purposes.
[16:36:31] <Mothrayas> I have to say this argument about robot drug usage is a lot more fun than whatever else we have been doing in the past two+ hours
[16:08:10] <BenLubar> a TAS is just the limit of a segmented speedrun as the segment length approaches zero
nymx
He/Him
Editor, Judge, Expert player (2234)
Joined: 11/14/2014
Posts: 932
Location: South Pole, True Land Down Under
Reviving this thread to see how members feel about the changes to the movie rating system.
I recently discovered that if you haven't reached a level of frustration with TASing any game, then you haven't done your due diligence. ---- SOYZA: Are you playing a game? NYMX: I'm not playing a game, I'm TASing. SOYZA: Oh...so its not a game...Its for real? ---- Anybody got a Quantum computer I can borrow for 20 minutes? Nevermind...eien's 64 core machine will do. :) ---- BOTing will be the end of all games. --NYMX