Level 10

Overhaul of the editing moderation system (Part 3)

Entry Level and Accessibility

One of the defining features of Google Maps is that there are no entry-level barriers or tutorials when it comes to suggesting edits or uploading photos. Simply create an account, accept some form of terms and conditions, and any user, whether automated or oblivious, can be on their merry way. Between the casual users and those who dive in head first, probably around 98% of all Google Maps users have never come across the Guidelines for representing a business on Google. This alone casts doubt on the quality of the mapping practices and protocols behind updates to the map. These guidelines are not negotiable or open to abuse; they are simply the rules that must be strictly adhered to.

OpenStreetMap, by contrast, is full of easily findable information: how to create an account, how to start mapping, and its own guidelines on what should and shouldn't be on the map. In fact, when users first start out, they are directed to complete a compulsory tutorial in a sandbox, which walks them through the basic editing features and opens up a brand new world for them to edit. Everything is, of course, subject to a crowdsourced moderation process that logs every single edit: if another user finds a discrepancy, the change can be reversed by anyone with the moderation privileges outlined previously.

 

The Compulsory Tutorial/Vetting Process

[Image: Vetting-Quiz.gif]

 

For the time being, these guidelines are well hidden from the average user's sight, so testing users on them, or at least guiding them in the right direction, should help them make better decisions when amending the map. These tutorials only need to be short and sweet to minimise annoyance to the average Joe, but they must be unskippable for all users. As a bonus, such a tutorial would also reduce the influx of automated accounts making bulk edits all around the world by acting as a barrier to them.

Of course, we can use hypothetical businesses within a defined sandbox rather than real local ones; the only issue would be the availability of non-English tutorials and locally relevant examples, which would need to be implemented across the world. These vetting quizzes would give Local Guides and users a rough idea of what is expected of them, whether that is naming conventions, information on the map marker, or adding telephone numbers, categories and websites.

Along with the examples above, using the survey system and/or an in-app method to test and build Local Guides' knowledge right from the start would let us foster the desired behaviour from day one. Why should education only begin at the moderation stage, when it could happen before users start making mistakes? It is generally agreed, and often stressed, that Google could do a better job of communicating these guidelines, so why not introduce them to users right at the start? And if users won't accept a tutorial or an initial vetting process, how can we be sure they know what they are doing? There are currently zero entry-level barriers; all you need is an account and you can start. In my testing from my mom's account, I managed to have hundreds of varied edits approved, whereas my Level 9-10 account initially failed. Given that all new accounts start out with a considerable amount of trust, their freedom to suggest edits and have them easily approved is confusing, and a moderation process like that spells possible disaster.
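To make the idea concrete, here is a minimal sketch of how an unskippable vetting quiz could gate edit rights on a new account. Everything here (the quiz questions, `PASS_MARK`, the `can_suggest_edits` check) is a hypothetical illustration, not any real Google Maps mechanism:

```python
# Hypothetical sketch: a new account may not suggest edits until it
# passes a short quiz on the business-representation guidelines.

QUIZ = [
    ("May a business name include promotional taglines?", False),
    ("Should the map marker sit on the business location?", True),
]

PASS_MARK = 1.0  # every question must be answered correctly


def grade(answers):
    """Return the fraction of quiz questions answered correctly."""
    correct = sum(1 for (_, key), ans in zip(QUIZ, answers) if key == ans)
    return correct / len(QUIZ)


def can_suggest_edits(account):
    """New accounts cannot edit until their quiz score meets the pass mark."""
    return account.get("quiz_score", 0.0) >= PASS_MARK


account = {"name": "new_user"}          # fresh account: no edit rights yet
account["quiz_score"] = grade([False, True])
```

The point of the gate is not that the quiz is hard, but that it forces every account, human or automated, through the guidelines once before any edit is accepted.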

 

Making the guidelines visible, and having them followed cohesively by Google My Business owners, individual Google Maps users and Local Guides alike, would be a vital alternative to endlessly responding to feedback and circumventing manual moderation. Rather than staying in a reactive state, targeting poor-quality edits at the source and taking a proactive approach to quality suggestions would vastly improve Google Maps. The hours, posts and discussions spent wondering whether edits are pending or not applied, and speculating over the billion possible reasons, are a sheer drain on productivity, because at the end of the day these edits are not difficult to make.

If you want to complete missing information such as a phone number or website, it should be easily verifiable by anyone. If you want to move a map marker, a little nudge within address-defined parameters would be easy with a bit of research into indoor maps. If you want to change a name, check how the guidelines judge your suggestion, and better still, get a regional lead who is actually familiar with your area to look at it for you. Not all edits are snowflakes.

Level 4

Re: Overhaul of the editing moderation system

Great post @Briggs, as I've already mentioned in another thread which references this one.

 

After a bit of thought, I can propose the following enhancement. If Google doesn't want to pick this up, whether out of paranoia about spammers on every corner, or because they don't want to get burned by data their algorithms flag as untrustworthy but that they lack the resources or the smarts to review, then this kind of community review and feedback could be offered as a layer you can turn on, the same way you do with the satellite and traffic layers. If it's on, you see the "questionable" edits; if it's off, you don't. In other words, a "community contribution layer", which goes halfway towards absolving Google of any responsibility in the matter, albeit at the cost of "downgrading" the local reviewers' status. 🙂
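As a rough sketch of the toggle, the layer could simply filter which edits are rendered; the statuses and field names below are illustrative assumptions, not anything Maps actually exposes:

```python
# Hypothetical sketch of a "community contribution layer" toggle,
# analogous to the satellite or traffic layers.

edits = [
    {"place": "Cafe A", "field": "phone", "status": "verified"},
    {"place": "Cafe B", "field": "name", "status": "questionable"},
]


def visible_edits(edits, community_layer_on):
    """Questionable community edits are shown only when the layer is on."""
    if community_layer_on:
        return edits
    return [e for e in edits if e["status"] == "verified"]


on = visible_edits(edits, community_layer_on=True)    # both edits shown
off = visible_edits(edits, community_layer_on=False)  # only the verified one
```

Users who never enable the layer see only algorithm-approved data, which is exactly the liability split described above.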

 

Secondly, I would like to highlight the concept of investment: a concept that permeates your idea but is never explicitly verbalised, and that correlates closely with the concept of trust. As human beings, we tend to trust those who legitimately invest more in the causes and projects we believe in, and that works out pretty well for us. So whatever final system you propose, there should be no trust* without investment, where in this context investment is counted as useful and correct information added to the map (much like the current points system). This has the advantage of putting up further barriers to spammers and the like. To spam would then require two steps: a) make a legitimate investment to acquire trust, and b) publish spam. The spammer's return on investment (ROI) is then drastically reduced by step a).
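The "no trust without investment" rule can be sketched in a few lines; the threshold and point values here are made-up assumptions purely to show the mechanism:

```python
# Hypothetical sketch: trust (the right to review appealed edits) is
# earned only through accepted, useful contributions.

TRUST_THRESHOLD = 50  # points of accepted contributions (illustrative)


def trust_points(history):
    """Count points only from edits that were accepted as useful."""
    return sum(edit["points"] for edit in history if edit["accepted"])


def can_review_appeals(history):
    return trust_points(history) >= TRUST_THRESHOLD


# A spammer must now fund ~50 points of legitimate work before any
# spam carries weight, which slashes the ROI of spamming.
honest = [{"points": 10, "accepted": True}] * 6    # 60 accepted points
spammer = [{"points": 10, "accepted": False}] * 6  # nothing accepted
```

Because rejected edits earn nothing, publishing spam never builds the trust needed to publish more spam.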

* Note: by trust, I mean the trust needed to review data that was rejected by the algorithm and appealed by the original author, not the trust needed to contribute data that the algorithm can easily check and approve.

 

Just my 2 cents.

Level 10

Re: Overhaul of the editing moderation system (Part 3)

@Briggs the mandatory tutorial is a great idea.

I'd love to see it implemented, resembling something we, as SVTs, had to go through before publishing real businesses: we had to answer a long list of questions, then try publishing an empty business, and then create at least 10 listings before becoming trusted.

Translation could be handled by the crowdsourcing community, as well as by experienced LGs like me and you.

This could really avoid a lot of issues.

Level 10

Re: Overhaul of the editing moderation system

Machines, Local Guides and Google moderators all need to collaborate.

Level 8

Re: Overhaul of the editing moderation system

Hi @Briggs,

Compliments and kudos for your efforts to turn a pain into ideas for a solution.

 

Personally, I see challenges in your proposals that would be difficult to manage, such as the quality of the work done by the volunteer task force, particularly if some contributors are driven by points rather than by the ethos of the map. Secondly, as mentioned before: projects where the community owns the project have very different dynamics from projects with a single owner and benefactor (Google) plus an army of volunteers to help. The moment you separate ownership from the people who manage it, you get tensions between the different agendas and motivators.

 

What resonates most with me (surprise, surprise) is your advocacy for training and testing. I made similar suggestions at the beginning of 2017, and you have just reinforced my belief that the gamification of the Local Guides program is perfect for integrating exams. How many computer games out there do not require players to pass a certain test before they move on to the next level? Why not require Local Guides to pass tests where the game system verifies their knowledge of the rules and guidelines?

 

Since not all Local Guides contribute in the same way, it should be an opt-in process. At the start we are all at the base novice level, and each individual decides whether to opt in to the next level for photo contributions, map edits, and so on. Once I have passed the test covering my knowledge of the naming rules, my map edits correcting dirty names should carry a different weight (trust score) from the same edit made by a novice user.
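The weighting idea is simple to sketch: the same edit gets a multiplier when the contributor has passed the relevant exam. The exam names and multipliers below are illustrative assumptions:

```python
# Hypothetical sketch: an edit's trust weight depends on which
# qualification exams the contributor has opted into and passed.

BASE_WEIGHT = 1.0
QUALIFICATION_BONUS = {"naming_rules": 2.0, "map_edits": 1.5}


def edit_weight(contributor, edit_type):
    """A passed exam for this edit type multiplies the base trust weight."""
    if edit_type in contributor["passed_exams"]:
        return BASE_WEIGHT * QUALIFICATION_BONUS.get(edit_type, 1.0)
    return BASE_WEIGHT


novice = {"passed_exams": set()}
expert = {"passed_exams": {"naming_rules"}}
```

A name correction from the qualified contributor thus counts double, while the novice's identical edit still goes through the full moderation queue at base weight.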

 

Besides testing our knowledge of the rules, as per the examples in your animated GIFs, we should also be tested on the skills of HOW to edit. It should be (fairly) easy to create a Local Guides app that includes a so-called "sandbox" mode, where Google purposely places dirty data on a map screen and asks us to clean it up. In other words, Google already knows the correct data on this "dummy" map and uses it to test our editing skills. This game mode could give direct feedback on our edits, like the coaches used to do in the early days of Map Maker; in short, a "flight simulator" with a virtual coach/trainer that helps us improve our performance.
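Because the sandbox data is seeded by Google, grading is just a comparison against the hidden ground truth. The listing fields and feedback wording below are invented for illustration:

```python
# Hypothetical sketch of the "flight simulator": Google seeds a dummy
# listing with dirty data and grades the trainee's cleanup against the
# known-correct values, giving per-field coach-style feedback.

ground_truth = {"name": "Mario's Pizza", "phone": "+1 555 0100"}
seeded_dirty = {"name": "Mario's Pizza BEST IN TOWN!!", "phone": "+1 555 0100"}


def score_cleanup(submission, truth):
    """Return per-field feedback, like a virtual coach would."""
    return {
        field: ("correct" if submission.get(field) == value else "try again")
        for field, value in truth.items()
    }


untouched = score_cleanup(seeded_dirty, ground_truth)   # name still dirty
cleaned = score_cleanup(
    {"name": "Mario's Pizza", "phone": "+1 555 0100"}, ground_truth
)
```

Immediate feedback on a dummy map means mistakes cost nothing, exactly like a flight simulator.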

My next contribution to your "quality program" ideas zooms in on the performance of the Google Maps moderators. You talk a lot about monkeys these days 😉
It doesn't matter whether we are talking about hired third-party contractors or actual Google staff; all are subject to conflicting drivers. On the one hand, the drive to get through the queue as fast as possible; on the other, the need to verify each job in the queue with quality. If the teams that verify our edits manually, including their managers, are scrutinised on quantity, it is not difficult to guess which driver wins when they are at work. Overcoming this conflict of interest (a quality map and respect for contributors, versus processing queues fast and cheaply) requires quality control.

 

Now, Google hopefully already has processes in place for this type of quality control internally. But I think it would not take much tweaking of the verification process to ADD A LOOP to the workflow, where we (the contributors) give feedback on the outcome of a verification performed by either human or machine. All it takes is to let contributors (above a certain level) answer a short questionnaire after receiving a NOT APPLIED response, letting us explain why we think the outcome was a false positive. Our feedback then feeds into a performance score for each staff member. The moment a staff member gets flagged, a person from a different department (like Internal Affairs in a police force) starts monitoring that person's performance.
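The loop boils down to a per-moderator dispute rate with an escalation threshold. The threshold value and identifiers below are assumptions made up for the sketch:

```python
# Hypothetical sketch of the feedback loop: contributors dispute
# NOT APPLIED outcomes; a moderator whose dispute rate exceeds a
# threshold is flagged for independent review.

from collections import defaultdict

FLAG_THRESHOLD = 0.3  # fraction of disputed decisions (illustrative)

stats = defaultdict(lambda: {"decisions": 0, "disputed": 0})


def record_feedback(moderator_id, was_disputed):
    """Fold one contributor questionnaire into the moderator's score."""
    s = stats[moderator_id]
    s["decisions"] += 1
    s["disputed"] += int(was_disputed)


def flagged(moderator_id):
    """True when the dispute rate crosses the escalation threshold."""
    s = stats[moderator_id]
    return s["decisions"] > 0 and s["disputed"] / s["decisions"] > FLAG_THRESHOLD


for disputed in [True, True, False, True, False]:
    record_feedback("mod_42", disputed)   # 3 of 5 decisions disputed
```

The "Internal Affairs" step then only ever looks at flagged moderators, so the quality-control cost stays proportional to the problem.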

 

Of course, big game-changers like this should be tested and will require tweaking. I am happy to be a lab rat for the development team to see whether an alternative approach performs better. Just ask.

 

Happy holiday season wishes to all, JeroenM