We have a post from ErmesT marked as the answer to the initial inquiry, and we are homing in on the heart of this topic, @PaulPavlinovich and @ErmesT. To make it easier for other Guides to follow, here is a reset of the post thread:
HOW CAN WE BEST UNDERSTAND THE LEARNING PROGRESS OF GOOGLE'S ARTIFICIAL INTELLIGENCE IN TERMS OF ITS DECISION-MAKING ACCURACY?
Is there an accessible metric showing where the AI is on this developmental curve?
Let me frame this conversation thread with a key detail: we arrived at this aspect by discussing how accurately the spam filter applies the posting rules when it makes determinations.
When a review or a profile is reported, the AI makes a decision and either acts or declines to act. The initial thread looked at a scenario where a business owner, under pressure from accumulating negative reviews, had posted a rating to boost his own company's rank. My focus was on how an algorithmic AI can determine what investigation is needed to "reason" through such a scenario to a decision. Paul and ErmesT, you have both clarified that the rules should be followed and the report made, which I accept and support. Out of interest in understanding what we are doing as Local Guides who report wrong data when we find it, the next question is: WHAT SENSE DO WE HAVE OF THE AI'S ACCURACY?
The moral / ethical aspect is just one part of the picture, because it is only one avenue for evaluating correctness. As you explain, Paul, Tesla Autopilot crashes far less often than human drivers do, but it does crash, and an AI's quality depends on how it was built and then trained. Naturally, the results of the training process are monitored and adjusted to optimize outcomes.
Accountability for the correctness of AI decisions requires tracking the effectiveness of the training process: in other words, grading the learning.
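To make "grading the learning" concrete, here is a minimal sketch of how such accuracy tracking is commonly done in machine learning: compare the AI's spam verdicts against human-verified labels and compute precision, recall, and accuracy. Everything here is invented for illustration; it is not Google's actual evaluation method, and the data is hypothetical.

```python
# Hypothetical sketch: grading an AI spam filter by comparing its verdicts
# against human-verified labels. All names and data are invented examples.

def grade_decisions(predicted, actual):
    """Compute precision, recall, and accuracy for spam verdicts.

    predicted / actual: equal-length lists of booleans (True = spam).
    """
    pairs = list(zip(predicted, actual))
    tp = sum(p and a for p, a in pairs)          # correctly removed
    fp = sum(p and not a for p, a in pairs)      # wrongly removed
    fn = sum(a and not p for p, a in pairs)      # spam left standing
    tn = sum(not p and not a for p, a in pairs)  # correctly left alone
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    accuracy = (tp + tn) / len(pairs)
    return precision, recall, accuracy

# Invented example: 8 reported reviews, AI verdict vs. human judgment.
ai_verdict  = [True, True, False, False, True, False, False, True]
human_label = [True, False, False, True, True, False, False, True]
p, r, a = grade_decisions(ai_verdict, human_label)
```

Precision answers "of what the AI removed, how much was really spam?" and recall answers "of the real spam, how much did the AI catch?", which is exactly the kind of visibility the question above is asking for.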
Millions of us experience the impact of AI decisions. For example, a business owner told me last week that a person who had never used his business posted a 2-star rating a few months ago. He looked at her profile, and it showed many ratings posted in quick succession. He reported the review and the profile, but neither was removed. He eventually spoke to a Google Customer Service Agent, whose response was this: "The AI is not removing the profile or the review because she did not break a rule."
He asked me, “How did she not break a rule by rating a business she never used?”
I answered, "I believe the rep meant that, from the known data, there is nothing for the AI to act on. The person could have had a bad experience and posted under another name, and you could be a business owner who simply wants a negative review removed; there is no data to clarify which." In this instance, for the algorithmic AI, "right" means inaction due to a lack of data showing cause to enforce a rule. While multiple ratings or reviews posted in a short time may trigger the spam filter, sometimes they do not.
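The "many ratings in quick succession" signal mentioned above can be illustrated with a toy burst-detection heuristic: flag a profile if some minimum number of ratings falls inside a sliding time window. This is purely a hypothetical sketch of how such a trigger could work, with invented thresholds; it is not how Google's spam filter is actually implemented.

```python
# Hypothetical sketch of a rate-based spam signal: flag a profile whose
# ratings cluster too tightly in time. Thresholds are invented examples.
from datetime import datetime, timedelta

def looks_like_burst(timestamps, window=timedelta(hours=1), threshold=5):
    """Return True if `threshold` or more ratings fall within any `window`."""
    ts = sorted(timestamps)
    for i in range(len(ts)):
        j = i
        # Count ratings from ts[i] forward that fit inside the window.
        while j < len(ts) and ts[j] - ts[i] <= window:
            j += 1
        if j - i >= threshold:
            return True
    return False

base = datetime(2024, 1, 1, 12, 0)
burst  = [base + timedelta(minutes=2 * k) for k in range(6)]  # 6 in 10 min
spread = [base + timedelta(days=k) for k in range(3)]         # 3 over 3 days
```

A signal like this explains both halves of the story: a burst can trigger review, yet a profile that stays under the threshold produces no data for the AI to act on, so the "right" decision becomes inaction.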
To be clear, I am not concerned about these particular reviews; I am interested in the science, and in learning how to see the whole picture as best I can.
We are all related.
Cowboy Z