Kentucky Proves Equiratings is Pointless for Data and Safety

A 23% fall rate that could have been predicted with proper risk analysis tools and demographic considerations.

Given that the USEA and USEF have thrown in their hand with Equiratings and people with no understanding of data analysis, unless conditions are perfect this is going to keep happening. Face it: the USEA, USEF, and FEI don’t care about real safety so long as money keeps riders and horses going.

At no point did Equiratings look at risk on their website. They only predicted winners, proving that it is a betting program, not a risk analysis program.

17 Likes

Equiratings does not use their website for publicly analyzing risk; they use it for fan engagement. If you want risk analysis, you pay for it, and it’s not public domain.

No comment on the rest of your post about “proper data analysis”. It’s clear you have a ‘correct’ way in mind, but as long as neither you nor Equiratings are sharing your algorithms, it’s not going to help anyone on a public forum.

9 Likes

My algorithm is simple multivariate statistics that I’ve published in scientific and medical journals. I use odds ratios combined with log-likelihood or Cox regressions to measure each variable’s effect on outcome parameters. I verify the test variables using ANOVA methods as well, along with categorization of the data demographics, including participants.
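For readers unfamiliar with the odds-ratio part of that toolkit, here is a minimal sketch of the calculation for a 2x2 exposure/outcome table, with a Woolf 95% confidence interval. All counts are invented for illustration; they are not real FEI or Equiratings numbers.

```python
import math

def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table, with Woolf 95% CI.
    a = exposed with event, b = exposed without,
    c = unexposed with event, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # std. error of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)

# Hypothetical example: 30 falls in 400 starts for one demographic group
# vs. 15 falls in 400 starts for another (illustrative numbers only).
or_, ci = odds_ratio(30, 370, 15, 385)
```

If the lower bound of the interval stays above 1.0, the elevated risk for the first group is statistically distinguishable from chance, which is the kind of group-level finding the FEI audit reported.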

Hundreds of experts around the world do this every day to predict risk. Nothing secret or nefarious. Just good data analysis, as was done by the FEI in 2016 under their Eventing Audit. It was carried out by expert epidemiologists and NOT riders or owners.

Did you know they found at the 2-star level women were more at risk for injury and death than men? Or that frangible devices INCREASED the risk of rotational fall by almost 180%?

And risk analysis should not require payment if it involves a public health issue such as horse and rider injury or death.

I find it ironic that you defend lack of transparency when in the past you were one of those leading the call for real data and safety analysis. What changed?

13 Likes

All work worth doing should be paid for. In this particular instance I think it’s something the national federations should be paying for and making available to their members, but no one should be expected to work for free - not you, not Equiratings, not anyone.

I am very strongly in favour of generalized data transparency and safety analysis. For example “there is a significant drop in rider falls when riders upgrading to their first Preliminary have 10 clear rounds at Training/Modified, and another significant drop at 14 clear rounds at Training/Modified”. I am very strongly not in favour of publicly publishing rider-specific predictive analysis about combinations about to head out on course. For example “Boyd is likely to have a rider fall on Long Island T today”.

As you say, it should be available when it is a public health issue. The first example I laid out has benefits for the eventing population at large and can help people make their decisions, which I would agree makes it a public health issue. The latter is rider-specific, which I consider a private health issue. The riders should have access to that information, but the public should not.

7 Likes

Let’s consider what you just said. Please reference the data about rider falls. It is NOT in the FEI dataset, NOR has it been released by the USEF. Did your data just appear? You claim transparency, so be transparent.

I have never seen any data showing that success at a lower level increases safety at the next level up. As a matter of fact, if you look at the FEI yellow cards, you see that riders with successful Training rides still get cited for dangerous riding at the (now) two-star level, and that the majority of dangerous riding citations have migrated from the one-star to the two- and three-star levels since they changed the ranking systems.

Ironic that you include the Modified division, given that it has hardly ever been offered, yet there is already data saying it increases safety? Are you really telling the factual truth there? Do you realize that the numbers you just cited were literally pulled out of thin air by those on the eventing safety committee, without any actual data analysis?

That is why we forced the USEA to withdraw their proposal. The safety committee never even sent their proposal to the equine welfare committee for their analysis.

3 Likes

I said:

That was providing an example of data that I would like to see be transparently available. Not data that is currently transparently available. That’s why I said “for example”. At the time, I thought I was agreeing with you.

I don’t have data. I want data. I want transparency. I don’t have it.

You seem to think I have some sort of behind-the-scenes connection to a whole bunch of big scary secrets. I don’t. I want the same thing as you - transparent access to broad data sets that can improve safety for the population at large.

Where I seem to differ from you is that I want the people who produce it to be paid, and I don’t want rider-specific predictive analysis being published to the general public for commentary (though I’m happy for it to go to the rider directly). I don’t think that helps anyone, and it’s easy to see how it could hurt. While we’re disagreeing, I also am really not a fan of inflammatory titles in general.

14 Likes

My apologies for my misunderstanding.

It is not an inflammatory title; it is actually very factual. If Equiratings had truly predicted a 23% fall rate today, do you think they would have run the course unchanged, or at least modified it? If Equiratings did make that prediction and the FEI/USEF failed to act on it, that is negligence at best. In either case it shows that Equiratings has little to do with true risk analysis.

When past-outcome data (competition placings) is used to predict success (a win or place), that is not risk analysis. It is risk tolerance applied to betting (which is what economic risk analysis is). When risk is calculated solely on number of starts (as was done for this MERS rule), it is also only risk tolerance, e.g. how many falls the public and riders are willing to accept. It is not a true safety/risk methodology. Their baseline assumption at all levels is that every horse and rider is exactly the same; otherwise they can’t make their method work statistically.

We all know horses and riders who have a stop that is actually LESS risky and MORE safe than a clean or winning round.

8 Likes

I’ll bite, despite the provocative title:

First of all, my understanding, as others here have said, is that what Equiratings does publicly as to fan engagement is different from the risk consulting services that they provide to the FEI and national federations. There’s more under the hood than the prediction centre.

Second, again, without complete visibility into what they’re doing, my understanding is that the risk assessment information they provide to the NFs and the FEI is not in the form of assessments of specific courses. I believe they’ve done studies on types of fences, etc., but the overall question of how “safe” the course is is really a question left in the judgment of the course designer and later the officials on the day — I’ve at least never heard of Equiratings (or any other data analysis group) doing a kind of specific “we predict that this many people will have horse and/or rider falls on this particular course” analysis before a major event. Again, I could be wrong, but my understanding is that’s just not the question they’re answering.

That said, it is my understanding that one of the things they look at is the Bayesian probability of a rider or horse fall at a given level, based on past performance (at that level and at the level below). And broadly and generally (and unsurprisingly), the basic conclusion seems to be that a high number of successful past completions is a predictor of future completions — or at least, that the opposite is true — that people who have a history of falling are more likely to fall in future.

And I haven’t run the numbers myself, but I don’t think that kind of analysis would have predicted the fall rate we had today, nor the specific combinations who had falls (after all, people like William Fox-Pitt and Boyd and Clayton Fredericks have legions of successful international runs between them at this level and the levels below). It’s entirely possible — and the MER debate has suggested that at least many people anecdotally believe this to be the case — that how a combination has fared in the past at the level is the wrong question to ask to predict whether they’ll succeed in the future, but at least to a first order of approximation, I’m not aware of a currently-available way that everyone agrees “could have predicted” the 23% fall rate today.

Could we have said “gosh, it’s raining, and they all think the course is hard, and that’s probably going to cause some problems”? Yes. Beyond that? It would be fascinating to know, but I just don’t think that’s the point of their analysis.
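To make the point concrete, here is a toy Beta-Binomial update of the kind described above — a prior fall rate adjusted by a combination’s own record. The prior (5 falls per 100 starts) and the rider’s record are entirely hypothetical; this is NOT Equiratings’ actual model, just a sketch of why a completion-history model stays near the base rate for experienced pairs.

```python
# Toy Bayesian fall-probability update. All numbers are hypothetical.

def posterior_fall_prob(falls, starts, prior_falls=5, prior_starts=100):
    """Beta-Binomial posterior mean: start from a prior fall rate
    (here 5 in 100) and update with a combination's own record."""
    alpha = prior_falls + falls                               # posterior alpha
    beta = (prior_starts - prior_falls) + (starts - falls)    # posterior beta
    return alpha / (alpha + beta)

# A rider with 50 starts and only 1 fall at the level: the posterior
# barely moves below the prior, nowhere near a 23% day.
p = posterior_fall_prob(falls=1, starts=50)
```

For a pair with a long successful record, the posterior sits at a few percent, which is exactly why this style of model would not flag a 23% fall-rate day in advance.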

6 Likes

This article is old, but for context, this summarizes the basic approach in terms of risk management data that Equiratings provides the USEA for its members: https://useventing.com/safety-education/safety/equiratings-quality-index-faq, though I believe they send other information to the federation itself.

This is where there is a HUGE misunderstanding of what Equiratings is.

They have NEVER been part of any risk analysis or even data collection. They do not feed any data to the USEF that the USEF doesn’t already collect via the TD reports. The only data they use is results and starters. They do NOT use fence type, facility, weather, rider demographics, etc., as was done using the FEI data in 2016.

Also, Bayesian statistics is an invalid form for risk analysis, as it is predicated on supposedly complete data sets for comparison and has no method for hypothesis validation, e.g. Tukey’s, Wilcoxon,… Again, it works well for betting, economic, and financial risk tolerance, but not for safety analysis.

I am very familiar with the FEI and Equiratings data sets, as they are one and the same.

For example, now that Boyd Martin has had two falls in 12 months, will he be sent down a level based on the rules created from the FEI data? Equiratings gave him a prediction to win. Now he has two falls in 12 months, which constitutes a dangerous rider under the rules. Again, this indicates how Equiratings is not a viable tool for risk and safety.

This is why people who are true experts in safety and data analysis are needed. There could easily have been a rubric giving a risk assessment based on today’s weather. What do you think NASA and other entities do?
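A rubric like the one suggested above could be as simple as scoring the day’s conditions and mapping the score to an action. Every factor, weight, and threshold below is invented purely for illustration; a real rubric would have to be derived from actual fall data.

```python
# Toy weather rubric. Factors, weights, and thresholds are hypothetical.

def weather_risk_score(rain_mm, ground_softness, wind_kph):
    """Return a 0-10 risk score from simple weighted condition factors."""
    score = 0.0
    score += min(rain_mm / 5, 4)   # rainfall, capped so it can't exceed 4
    score += ground_softness       # 0 (firm) .. 3 (deep)
    score += min(wind_kph / 20, 3)
    return min(score, 10)

def recommendation(score):
    """Map a risk score to an illustrative action for officials."""
    if score >= 7:
        return "modify or hold"
    if score >= 4:
        return "review fences, brief officials"
    return "run as planned"
```

On a day with 20 mm of rain, deep going, and 30 kph wind, this sketch would recommend modifying or holding the course; on a dry, calm day it would recommend running as planned.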

5 Likes

So what should be happening, in your opinion? Should Equiratings or a similar company be given access to the course prior to competition and evaluate it for risk? Isn’t that the course designer’s job? What’s an acceptable percentage of falls? 5%, 10%, 15%? If the weather changes like it did today and the analysis company determines that risk has increased to an unacceptable level, do those riders just not get to ride?

1 Like

So, why didn’t you do that? Serious question.

Up thread, you claim that “risk analysis should not require payment (if) it involves a public health issue such as horse and rider injury or death.” You also claim to have all the tools and knowledge necessary to account for weather, etc. And you seem to have an interest in how this information could benefit the sport.

Why didn’t you put some sort of information out there? Even if it wasn’t taken seriously before cross country started, if it was more accurate than any other info currently available, I would think it would have certainly made people take notice after the fact. Which could lead to change in the future. At least, discussion.

9 Likes

I think the other question is: what’s the objection here? Is it that Derek made the course too hard, given that there has only been one five-star since Pau 2019? That’s a fair question, and many people asked it. Is it that we shouldn’t run 5* XC in the rain? Is it that we should assess rider qualifications differently? If we could have predicted it, how, and what should have been different?

11 Likes

Data is collected by the USEF and FEI via the TDs. What is done with that data is based on the willingness to release it. The USEF won’t even release their dataset to the USEA.

Equiratings and those who did the MERS analysis use the datasets publicly available from the FEI. Sadly, that dataset requires true experts in multivariate statistics, as was brought to bear in the eventing audit as well as the air vest study published two years ago.

What needs to happen is EXACTLY what was done in 2016 by the FEI. A true study using relevant epidemiological methods where there are clear variables with defined parameters such that we can see the effect of fence type, rider demographic, horse demographic, weather, region of competition, injury and outcomes.

None of that is private information in many countries. The US has HIPAA, but once the records are de-identified they can be used publicly. It is done all the time in medical clinical research under proper ethical oversight.

3 Likes

Continuing the discussion from Kentucky Proves Equiratings is Pointless for Data and Safety:

In my opinion, the EquiRatings metrics are mathematically simplistic. As we pointed out on another thread here on this topic, there are many ways to alias/bias the results into being nonsensical.

The value of any mathematical model, whether it is a simple mean value analysis, a Markov chain, or whatever, is in its ability to predict an outcome. But do their models actually predict results accurately?

Everyone seems to think that the factors which determine the outcome have statistically time-invariant distributions; that is, they are stationary. I would argue that many of the variables are not represented by stationary distributions, and there is no understanding of how the distributions are changing with time.
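The non-stationarity point is easy to demonstrate with a simulation. In the invented example below, the true fall probability drifts upward over time, so a rate estimated from the first half of the history understates the risk that actually shows up later — exactly the failure mode of assuming a stationary distribution.

```python
import random

random.seed(42)

# Toy illustration: the "true" fall probability drifts from 4% to 12%
# over 2000 runs. All numbers are invented.

def simulate(n=2000):
    """Simulate fall outcomes under a slowly drifting fall probability."""
    outcomes = []
    for t in range(n):
        p = 0.04 + 0.08 * t / n   # non-stationary: drifts 4% -> 12%
        outcomes.append(1 if random.random() < p else 0)
    return outcomes

runs = simulate()
early = sum(runs[:1000]) / 1000   # estimate from "historical" data
late = sum(runs[1000:]) / 1000    # what actually happens later
```

The historical estimate comes out well below the later realized rate, so a model fitted on the early data would systematically underpredict risk — no matter how carefully it was fitted.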

For example, didn’t a noted UK rider say before XC today in Kentucky that he didn’t feel ready? He hadn’t run very much, etc. Clearly he is not the same rider he was when eventing was up and running, say, a couple of years ago. So metrics as predictors of his performance would be incorrect.

This seems quite obvious.

Having said that, it seems the committee wants EquiRatings to be the answer. So perhaps the problem statement will be adjusted to achieve the outcome. /end of cynical comment/

1 Like

How can they predict a fall rate on a course that has never been ridden?

2 Likes

You literally make my case with that statement. Let’s flip it around: how can they predict the finishers and order of finish on a course that has never been ridden? If the model is so good at predicting success, it should be equally good at predicting failure. If it only predicts success, it is a poor and biased model.

See, their model and methods are not about risk. They are about performance metrics, the way a hospital or an aerospace firm might consider only the public’s risk tolerance for a given outcome. If the 737 has a history of successful flights, there should be no risk — that was the logic when they failed to conduct a true risk analysis and relied only on past performance when the 737 MAX came out.

To actually predict RISK OF INJURY, you cannot look at performance metrics. You have to examine all the variables associated with those risks. This was done in 2016 by the Eventing Audit. They considered specific fence types, rider gender and experience at a level, specific levels, etc. In so doing they were able to create the risk/safety analysis that led to different fences and different course design, though other things have not been instituted because that analysis focused only on FEI Prelim and up.

Thus, Kentucky today shows how Equiratings’ methods cannot actually improve safety if used as a safety and risk tool.

4 Likes

This is correct, @RAyers. But it’s worse than that. The EquiRatings metrics can produce nonsensical results. They can indicate that two riders have the same performance predictor when clearly one is having problems and on a negative trajectory, while the other had problems in the past but seems OK now.

Performance predictive metrics must show two things to be different when we can see with our eyes that they are different.

5 Likes

If I am not mistaken, RAyers did offer his services, for FREE, years ago, and was shot down by the PTB.

10 Likes

I took one look at Fox-Pitt’s horse and said, “I don’t like it. Surely a rider of his caliber can bring better than that to a 5*.” So we’re looking at courses, we’re looking at riders. How do we look at horses? And, of all the falls in KY, will we know the follow-up physical consequences to the horses?

1 Like