The professional community will always find a way to game the methodology. “You vote for us, and we vote for you”: Tagline's simple, childish cheating started with this, and it was just as easy to stop. Then came ring voting (which took some counting to catch), the creation of fake studios, and other delights of life. Unfortunately, some of these tricks still affect positions and are almost impossible to detect.

Quantitative secondary indicators

Another approach is to measure some secondary indicator available in open sources and build the rating on it. Runet Rating / CMSMagazine, for example, took this path.
The methodology of their first studio ratings was based on analyzing the TIC (Yandex thematic citation index) and Google PageRank of the sites in the client portfolios that companies uploaded to their accounts. Some mobile developer ratings, for example, are built on app ratings in the platform stores (App Store and Google Play).

Pros of this mechanic: relative transparency, and an understandable format based on analyzing the company's delivered projects. Projects can be flexibly ranked by industry and region to produce narrower cross-sections.
Cons: such mechanics often turn out to be one-sided and ignore entire market segments. For example, studio ratings built on the TIC and PageRank of portfolio sites always miss the segment of creative agencies that make promo sites. You can produce a hundred promo sites a year for the largest brands, but because of their specifics their life cycle is very short; they simply will not gain weight with search engines (and no one sets that goal). An agency can also arrive "ready-made": a client spent ten years developing a site with one contractor, then a new agency took over and received a significant boost to its rating score without doing any work.
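To make the mechanic concrete, here is a minimal sketch of how such a secondary-indicator rating could be computed, assuming a studio's score is simply a weighted sum of the TIC and PageRank of its uploaded portfolio sites. The field names, weights, and example data are illustrative assumptions, not the actual Runet Rating / CMSMagazine formula.

```python
from dataclasses import dataclass

@dataclass
class PortfolioSite:
    """A site a studio has uploaded to its rating account (hypothetical schema)."""
    url: str
    tic: int   # Yandex thematic citation index of the client site
    pr: int    # Google PageRank of the client site (0-10)

def studio_score(portfolio: list[PortfolioSite],
                 tic_weight: float = 1.0,
                 pr_weight: float = 10.0) -> float:
    """Aggregate a studio's score as a weighted sum over its portfolio sites.
    The weights are arbitrary illustration values."""
    return sum(site.tic * tic_weight + site.pr * pr_weight for site in portfolio)

# Illustrative data: two studios with uploaded client portfolios.
studios = {
    "Studio A": [PortfolioSite("a-client.example", tic=350, pr=4),
                 PortfolioSite("b-client.example", tic=120, pr=3)],
    "Studio B": [PortfolioSite("c-client.example", tic=900, pr=6)],
}

# Rank studios by aggregated score, highest first.
ranking = sorted(studios, key=lambda name: studio_score(studios[name]), reverse=True)
print(ranking)
```

Note that nothing in such a formula distinguishes sites the studio actually built from sites it merely inherited, which is exactly the "ready-made" agency problem described above; likewise, short-lived promo sites contribute almost nothing to the sum regardless of their quality.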