Thank you, guys, @Briggs @Osaka78forTRUMP , for questioning the purpose of this project. You are once again proving yourselves to be analytical thinkers.
@AkmalB , with regards to the legality of this project, I would like to say this: even if a project were strictly within the rules of the program but potentially damaging to the Local Guides program or to Google Maps, I would not want to go near it. I trust that the community here on Connect is mature enough that, through peer review, bad ideas (even well-intended ones) will be objected to by either the community or the Moderators. Sometimes it takes weighing the pros and cons from different perspectives, so if you see how this project could hurt anyone’s interests, please don’t hesitate to let me know.
The main concern would be: would the data this project reveals benefit hackers and other bad actors who want to beat the system? I don’t think the data we would share with the public reveals any (secret) insight into how the bots evaluate edits. This research will probably confirm that making consistently safe edits improves one’s success rate, but that is not a secret.
What (I think) the project does:
Collect data for each edit a person makes. These are the data fields:
- URL of the edited location (not relevant for the statistics, but great to have available when doing quality evaluations afterward).
- What was edited in the single submission (name, address, pointer, category, website, etc.).
- The outcome: either Accepted or Not Approved (we are probably not interested in Pending, or are we?).
- The participant’s subjective judgment of whether it was a high- or low-risk edit.
- Was this the first attempt to get this changed on Maps, or not?
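To make the idea concrete, a single record could look like the sketch below. The field and class names are purely my own illustration of the five data points above, not part of any official schema or tool:

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    """Result of an edit, as the participant would report it."""
    ACCEPTED = "Accepted"
    NOT_APPROVED = "Not Approved"
    PENDING = "Pending"  # recorded, even if we end up ignoring it

@dataclass
class EditRecord:
    url: str                   # Maps URL of the edited location
    edited_fields: list[str]   # e.g. ["name", "category"]
    outcome: Outcome
    self_assessed_risk: str    # participant's own "high" / "low" judgment
    first_attempt: bool        # True if this is the first try at this change
```

A participant would fill in one such record per submitted edit; the URL field is never needed for the statistics, only for later quality checks.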
Primary benefits from an individual point of view:
- Monitor performance: what is the success rate of my edits (e.g. 10% get Not Approved)?
- On what type of edits do I score the highest, and on what type the lowest? (For example, a person might do badly editing categories but score highly editing place names.)
- How is my personal judgment? Do most of my “safe” edits get approved?
- How do I perform compared to others in my area?
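The first of these metrics is just accepted edits divided by decided edits, with Pending left out. A minimal sketch, assuming each edit is reported as a plain dictionary with an "outcome" key (my own illustrative representation):

```python
def success_rate(edits):
    """Share of decided edits (Pending excluded) that were Accepted."""
    decided = [e for e in edits if e["outcome"] != "Pending"]
    if not decided:
        return None  # no decided edits yet, rate undefined
    return sum(e["outcome"] == "Accepted" for e in decided) / len(decided)

my_edits = [
    {"outcome": "Accepted"},
    {"outcome": "Accepted"},
    {"outcome": "Not Approved"},
    {"outcome": "Pending"},
]
print(success_rate(my_edits))  # 2 of 3 decided edits accepted
```

The same function applied to a filtered subset (only category edits, only "safe" self-assessed edits, only edits from my area) gives the other per-person breakdowns.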
Answers that this project might provide:
- Are there groups of people (individuals, regions, global) that perform below expectations? Could education help such a group improve its performance?
- Do people who risk-assess their edits before submitting them do better than people who don’t?
- Are there trends (long-term and short-term)? This one must interest you, @Briggs , as it could show that, say, after a new bot launches, many high performers like yourself temporarily do badly. On another level, we might find that very few people edit categories. Does this mean they don’t look at categories when evaluating the data of a place they visit? If so, should we “campaign” for members of a regional community to pay more attention? Or it could give us insight into which data is mostly “dirty” on the maps. Does that tell us anything?
- Is the success rate different when it is not a first attempt? (Should we ask whether it was the second, third, or fourth attempt, to be more specific?)
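The last question can be answered with a simple split on the first-attempt flag. A sketch under the same assumed dictionary representation (the field names are my own, not an agreed format):

```python
def _rate(edits):
    """Accepted share among decided edits; None when nothing is decided."""
    decided = [e for e in edits if e["outcome"] != "Pending"]
    if not decided:
        return None
    return sum(e["outcome"] == "Accepted" for e in decided) / len(decided)

def first_vs_repeat(edits):
    """Return (first-attempt success rate, repeat-attempt success rate)."""
    first = [e for e in edits if e["first_attempt"]]
    repeat = [e for e in edits if not e["first_attempt"]]
    return _rate(first), _rate(repeat)

sample = [
    {"outcome": "Accepted", "first_attempt": True},
    {"outcome": "Not Approved", "first_attempt": True},
    {"outcome": "Accepted", "first_attempt": False},
]
```

If we later record the exact attempt number instead of a yes/no flag, the same grouping works per attempt number.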
The additional benefits for unofficial LG Communities:
This program could greatly boost the regional communities. After all, when a Local Guide wants to participate, they get the application from their regional community. A positive side effect is that the regional communities could gain more local Local Guides as members. Secondly, it leverages the value that the regional community has, thereby (potentially) improving engagement by members of the participating communities.
The project is executed completely outside the Maps environment, so I was not thinking of using the API to collect data. I am not fully up to speed on what can and cannot be done with the API, but as far as I understand, one cannot retrieve “My Contributions” via the API. Since participants would manually submit their performance data per edit, I don’t see how this could in any way upset any system related to Maps.
@Briggs , I am leaving statistics on valid edits that get rejected out of the equation. After all, you as an expert have a good idea of whether you are making correct edits, but many of us don’t know when we are making a mistake (because we are not properly trained!). Rather than asking participants for their LG level (do we really think it affects your trust score?), asking for the total number of edits they have contributed gives a much better indicator of their experience level when analyzing individual data. Having said that, given some of the poor advice high-level LGs have given here on Connect, certain top contributors could still be poor at making quality edits…
Yes, we potentially change the behaviour of participants, but I was thinking that change would be positive. Or am I missing something?