Thanks @elear for starting a discussion. I can see people have been expressing themselves openly and this is a good thing. There are lots of arguments I agree with.
First of all, let's talk about the questionnaire:
We have been brainstorming and discussing the questionnaire issue for months on Discord, but I can't say I am happy with the proposed draft. Why should we pay so much attention to the presentation of the post? Two questions would suffice: #1 on presentation/formatting and #2 on writing style/language/engagement (the basic project details should, of course, all be there for the post to be considered complete). We could also skip this question entirely, or narrow the options down to a) basic and b) well-written, so the translator who wants the extra points knows exactly what to do.
Linear rewards, as @alexs1320 has also mentioned: this is an argument I do agree with. When we have an 1100-word and a 1600-word contribution, we can't reward them the same (assuming they are of the same difficulty level and semantic accuracy).
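The linear-rewards idea can be sketched as a simple per-word formula. This is purely illustrative: the base rate and the helper name `linear_reward` are hypothetical assumptions, not the actual Utopian scoring rule.

```python
# Illustrative sketch of linear (per-word) rewards -- NOT the real
# Utopian formula. The rate_per_word value is a made-up assumption.

def linear_reward(word_count, rate_per_word=0.05):
    """Reward proportional to the translated word count."""
    return word_count * rate_per_word

# Two contributions of equal difficulty and semantic accuracy:
short_contribution = linear_reward(1100)  # 55.0
long_contribution = linear_reward(1600)   # 80.0
```

Under a flat scheme both contributions would earn the same; under a linear scheme the 1600-word contribution earns proportionally more, which is the fairness argument being made here.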
Defining more clearly both the number and the type of mistakes (semantic errors, spelling, typos) is also important, as these define the quality of the text the LM receives.
Pre-defining the difficulty levels of all projects is also important: it leads to a common agreement and leaves no room for unfair scoring across language groups (there have been projects rated "high difficulty" by some LMs while others rated them "average" or "low").
From what I (and others here) see, if this questionnaire is implemented we will be "cutting down on" points from very good translations. This will most likely demotivate people. We want to keep things balanced and make everyone happy (as far as that is feasible). I don't think "punishing" good contributions for being good will get us anywhere; instead, we should try to help mediocre work get better. The feedback from the LM is supposed to help a translator improve, so why should excellent contributions score lower? Just because the category's VP % is limited? And why not simply drop the score/upvote % ratio a little instead?
Now regarding the category as a whole:
IMHO, Translations has been expanding too rapidly. We have brought so many people on board in a few months that I doubt this will be sustainable in the long run, and I am not the only one to point this out (@scienceangel has said the same).
@rosatravels has tried to make the translations community more engaging and active, and she has made very beautiful posts featuring LMs and translators. But let's be honest: translation posts are not "engagement material", and their target group (here on Steemit) is very limited. We try to make our posts more enjoyable, give them a more personal tone, or explain a few things about the project we are working on and the tools and features it offers, but nobody will read what we write there (nobody other than the LM who reviews it).
If you listened to the last Utopian weekly, several users pointed out that our average score is too high (it's in the 70s). I know it's not pleasant to hear, but I think it would be fair to try to lower it a bit; other categories score in the 50s and 60s. That doesn't mean "punishing" good users: we will try to maintain high rewards for great contributions (which will still be able to reach 100, with more emphasis on the absence of errors), but I hope you understand we must deal with the final-score issue. Right now it's just too easy to reach high scores. As for having too many users, it's easy to say that now, but remember that we came from a higher Steem price (which could have enabled us to support more users), and most of the recruiting was done in summer, when, if you remember, activity was much lower than it is today. I don't think that having many translators is a problem in itself, but we will have to standardize the number of contributions per team, so setting contribution limits per team may be something to consider too. That way, teams with more translators won't consume more VP.
I have missed a lot of updates these days with the new house and stuff, so I don't have the whole picture.
What I want to say is that high scores were easily attainable from the start, and people have gotten used to the good numbers (we had that conversation in the moderator chat some months ago; I don't remember the details now). I can speak for my team: I know we will keep working even if the scores are lowered. I have supported lowering the score/voting-strength ratio for a long time. What I meant by "punishing" is that if this questionnaire (the one I saw two days ago) is implemented, some users may be put off by the lower scores they will get for the same effort (and I don't mean those close to the good-excellent threshold, but those who continuously deliver excellent work).
Now, regarding the limit of contributions per team: we already set a limit on contributions per translator, which I think achieves much the same thing and is fairer. If we set team limits, then translators in larger teams will be even more "confined". Maybe setting a maximum number of translators per team would be fairer (4-7 sounds reasonable, depending on the language and how widely it is spoken).
I see lots of people have posted their views, I need to catch up with some more posts today...
I don't envy you; moving is tough. I did it myself in June (during our first recruiting window), so I appreciate that you found the time to answer my comment. In the new questionnaire we tried to reward error-free contributions more (also to add some objectivity to the scoring) and to increase the score difference between average and excellent translations. About the number of users: I don't think each team should have a set number of translators. We should look instead at the output of the teams. For example, 5 German translators will make fewer contributions than 5 Spanish translators. So we need to allocate a fixed number of contributions per team and then recruit enough users to keep that number balanced.