If the slower machine gets the same number of WUs as the faster machine, you are basically forcing the faster machine to run out of work so that they are all 'equal' in the end. That goes against the 'the faster, the better' principle, which is common sense in distributed computing.
Well, if someone runs out of WUs for GRCStarter@Home, they'll probably start crunching some other project. Science still gets done.
The point of distributed computing is to welcome more computing power, not to turn it away by artificially restricting the number of available workunits. Projects which don't have enough workunits are usually removed from the whitelist, so it makes very little sense to create such a handicapped project intentionally.
Yeah, in its current proposed state I wouldn't vote to whitelist this project.
Of course not! This is a very minimal version of the proposal, put out there to get feedback from the community and to see if anyone wants to help make it a reality.
Absolutely agreed! However, keep in mind this is just a single project out of dozens. The point of the project is explicitly stated: it provides a mag boost for new users while helping new projects complete work and gain visibility. This means that anyone with a lot of processing power can offer some of it to the GRCStarter@home project with the intention of helping new users rather than gaining mag themselves. Once new users learn enough from GRCStarter -- or perhaps there is a time limit for low-mag CPIDs, or some other mechanism -- they are encouraged to move on to a project or projects they wish to crunch.
There are many details to work out, but a handicap project does not mean that WUs will run out. It will constantly be crunching. What @personthingman2 is describing holds potential as a protocol. It's hard to say right now whether it will work, but it's definitely worth exploring as we build protocols.