
On August 13 2019 03:09 Starlightsun wrote: Thanks DarkPlasmaBall. That's fascinating how many applications there are. I'm learning quadratics in algebra and just wondered what they're used to model.
No problem! After completing our quadratics unit, I always assign my Algebra 2 students a paper... to write about any application of quadratics that they want, based on their hobbies, interests, experiences, etc.


(143(5) + 67)5 ___________ 2 A. 1950
B. 1955
C. 1956
D. 1953
2. (()()(10x))=5 solve for x. A. X= 0.5
B. X= 0.7 repeating
C. X= 0.5
D. X= 1
I think I can use Integral Calculator for solving such problems, but still, can anyone solve this for me?

At least get your syntax right. Your first one is missing an = sign, and either brackets or a multiplication operator, in which case the answer is obvious, as the alternatives aren't interesting.
But more importantly, try solving them yourself and tell us where you get stuck, rather than asking us to do your homework.

Indeed. This is not a "we do your homework" thread.
And if the syntax were understandable, this mostly looks like 7th-grade equation solving, inexplicably in the form of a multiple-choice test.

Sorry dragontattoo, but this isn't a homework forum and you've mistyped at least one math problem anyway. Please consider asking your math teacher for extra help; I'm sure they would want to know if you're unclear about any of the material you're learning in class. Good luck!

I'd also be willing to help here, if you first make some effort to solve the problems and formulate more clearly where your difficulties in understanding the subject lie.

Alright, could use some advice [statistics]
I am doing a data science project, it's a project of my choosing for my final project.
I am hypothesizing that there is official bias from some or all officials in NBA basketball, in the form of unfairly officiating certain games so that the underdog team is more likely to win. Essentially, referee corruption. A common claim among viewers of the NBA.
I've collected data from the 2012-2018 seasons. I know which refs officiated which games, I know the quantity of fouls called and their types, and which teams they were called on. I have the pregame moneylines for betting (the odds), so I can see which team was the underdog and how big of an underdog they were. I also have the results of the games.
Now, I can visualize this data. I can find and point out discrepancies.
My question is, what is a good statistically sound method by which to best "prove" this bias? Something like a confidence %. I know this is a vague question, if more information is needed just ask and I will do my best to explain.

Well this sounds interesting, but I have some questions about the general idea first:
Before you get to the rather complicated analysis of "underdogs are more likely to win because of reason X", have you checked whether the underdogs did in fact win more often than they should have? Because that's something you should be able to check with just the odds and results, and no further data. If your assumption of biased refs is right, you should notice a difference here. If not, well then I suppose even a more detailed analysis would be unlikely to show any ref bias.
Quantity of fouls alone does not seem to be enough: I could imagine a lot of reasons why the better or the worse team might actually commit more fouls, and therefore an unbiased ref would correctly call more fouls on one kind of team. For example, while I'm no expert on basketball, the losing team seems to commit a lot of tactical fouls in the final minutes of the game when they think they still have a chance to win. Assuming the underdogs are more likely to be in a losing position, you might find that the referees are more likely to call fouls on the underdogs. But that's not a sign of bias (against underdogs); it might just be a natural consequence of how the game is played.
If you "only" consider fouls that were actually called, you might not notice that "potential" bias can also show up as non-calls of actual fouls. It seems to me that ideally you should have data comparing actual fouls/non-fouls to calls/non-calls. Obviously, such data might be impossible to obtain, however.
Also, even if referees are more likely to help the underdog, this does not automatically mean they are corrupt. It's quite common for neutrals to side with the underdog in sports contests, and maybe referees are not immune to this either (on a subconscious level).

Another question: how do you even get the odds of a team winning? Betting odds aren't unbiased either. There are many things that go into sports gambling that have nothing to do with the team's chance of winning. So treating that as your ground truth for building a referee bias detector is questionable as well.

On November 09 2019 19:36 Mafe wrote: Well this sounds interesting, but I have some questions about the general idea first:
Before you get to the rather complicated analysis of "underdogs are more likely to win because of reason X", have you checked whether the underdogs did in fact win more often than they should have? Because that's something you should be able to check with just the odds and results, and no further data. If your assumption of biased refs is right, you should notice a difference here. If not, well then I suppose even a more detailed analysis would be unlikely to show any ref bias.
Good question, I plan on investigating a bunch of stuff like that. And even this question, "do the underdogs win more than they should?", is a tough one to answer. I suppose the expected value from betting on the winner every game should be roughly even money over the long term. Does that sound correct?
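One common way to frame that first sanity check: convert the moneylines to implied probabilities, strip the bookmaker's margin (the vig), and compare the summed implied underdog probabilities against the observed number of underdog wins. A minimal sketch with made-up odds (all the example games below are hypothetical):

```python
# Do underdogs win more often than the moneyline implies? Convert American
# odds to implied probabilities, remove the vig, then compare expected vs.
# observed underdog wins over the dataset.

def implied_prob(moneyline):
    """Raw implied probability of an American moneyline."""
    if moneyline < 0:
        return -moneyline / (-moneyline + 100)
    return 100 / (moneyline + 100)

def devig(p_fav, p_dog):
    """Normalize the two raw probabilities so they sum to 1."""
    total = p_fav + p_dog
    return p_fav / total, p_dog / total

# (favorite_line, underdog_line, underdog_won) -- made-up example games
games = [(-300, 250, 0), (-150, 130, 1), (-500, 400, 0), (-120, 100, 1)]

expected = 0.0
observed = 0
for fav, dog, dog_won in games:
    _, p_dog = devig(implied_prob(fav), implied_prob(dog))
    expected += p_dog
    observed += dog_won

print(f"expected underdog wins: {expected:.2f}, observed: {observed}")
```

If the refs were systematically helping underdogs, you'd expect the observed count to sit consistently above the expected count across many seasons, not just in one small sample like this.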
Quantity of fouls alone does not seem to be enough: I could imagine a lot of reasons why the better or the worse team might actually commit more fouls, and therefore an unbiased ref would correctly call more fouls on one kind of team. For example, while I'm no expert on basketball, the losing team seems to commit a lot of tactical fouls in the final minutes of the game when they think they still have a chance to win. Assuming the underdogs are more likely to be in a losing position, you might find that the referees are more likely to call fouls on the underdogs. But that's not a sign of bias (against underdogs); it might just be a natural consequence of how the game is played.
Really good point and a big mistake on my part to not even think about that. I don't watch basketball much these days but I've seen enough that this should have been a fairly obvious factor.
It will be a lot of work, but I could tag fouls/violations with some kind of extra value that contains information on how far ahead or behind the fouling team is. Also, I do plan on tracking the quarter of the game as well; I think 2nd-half data, or even 4th-quarter data, may end up being way more important than data from entire games.
Anyways this was a crucial thing to bring up, thanks.
If you "only" consider fouls that were actually called, you might not notice that "potential" bias can also show up as non-calls of actual fouls. It seems to me that ideally you should have data comparing actual fouls/non-fouls to calls/non-calls. Obviously, such data might be impossible to obtain, however.
Man, I wish such data reliably existed. There is something called "the two-minute report", and it does include no-calls. But sadly this data is only available from about 2017 onwards, and the report is only issued for games where the teams were within 3 points in the last two minutes. And the data is, of course, only for the last two minutes of the game.
But still, it might be a cool thing to take a look at.
But I can at least compare actual calls on the winner vs. expected calls on the winner (where "expected" is based on how many calls the loser's opponents normally get, and how many calls the winner normally gets). Winner and loser would refer to the specific teams, not league averages, of course.
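That comparison could look something like the sketch below; the function name and every number are hypothetical, and a real version would pull the two baselines from the season data rather than hard-code them:

```python
# Sketch of the "expected vs. actual calls" comparison: blend how many fouls
# the winner is normally called for with how many fouls the loser's opponents
# are normally called for, then look at the gap to the actual count.

def expected_calls(winner_avg_committed, loser_opponents_avg):
    """Naive expected foul calls on the winner: average of the two baselines."""
    return (winner_avg_committed + loser_opponents_avg) / 2

actual = 24  # fouls actually called on the winner in this (made-up) game
expected = expected_calls(winner_avg_committed=19.5, loser_opponents_avg=21.0)
print(f"expected {expected}, actual {actual}, excess {actual - expected:+}")
```

A single game's excess means little on its own; the interesting question is whether the excess is systematically signed for underdogs when aggregated over many games.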
Also, even if referees are more likely to help the underdog, this does not automatically mean they are corrupt. It's quite common for neutrals to side with the underdog in sports contests, and maybe referees are not immune to this either (on a subconscious level).
That's a valid point, and I certainly was never going to explicitly say that anyone was corrupt (I'd avoid even using the word).
However, I was thinking about this, and I think it's important that I don't just look at betting odds for "who will win", but also look at point spreads. It's harder to analyze, but tying foul/violation calls to a team beating a point spread could also be pretty interesting. Again, it doesn't prove corruption, but I'm not reaaaally going to say that this is what I am trying to prove.
On November 09 2019 22:40 Acrofales wrote: Another question: how do you even get the odds of a team winning? Betting odds aren't unbiased either. There are many things that go into sports gambling that have nothing to do with the team's chance of winning. So treating that as your ground truth for building a referee bias detector is questionable as well.
I found a good historical odds dataset for NBA, it has the opening and closing moneylines and point spread for 3 different betting websites.
You say betting odds aren't unbiased, but I think they are by far the least biased estimator. They are certainly better than the published statistical methods I've seen. I was also considering doing neural network prediction just for fun and comparing it to the betting odds' predictions.
But since you bring this up, it could be useful to measure just how good of a predictor the various types of betting odds are over my dataset and include this information.

Very interesting project idea! The first thing I was thinking of is a probit model, where you'd use betting odds and whatever other data you have as explanatory variables to study whether the underdog wins or loses. It is a fairly standard way to estimate models with binary outcomes (such as your win/loss). You could also see whether the possible bias is equally large in different subsamples of your data: whether it is the same for every team, whether there is a difference when the betting odds are close versus far apart, or whether one ref or another favours underdogs more. There is always a risk of running into really small subsamples if you start doing this, though. But all in all, I'm going to throw out a wild guess that even if there is some true underlying effect, studying the whole league at once is going to drown it out in noise; it will probably be more interesting to study whether the bias exists in different subsamples and whether it differs from one subsample to another.
I'll assume that by saying "prove" you already knew that there isn't a bog-standard way of doing this, and you are going to have to argue regardless why your chosen methodology is valid and why it is better than some other methods. I know there is a fair bit of literature on football (soccer) betting in the UK; I'd say that might be a good starting place to see how betting odds are treated and what kinds of statistical models are used when working with them.
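For what it's worth, the probit setup suggested above has the form P(underdog wins) = Φ(b0 + b1·x), where x could be, say, the implied win probability from the odds. The toy sketch below fits that model by hand on synthetic data (every number is invented) just to show the shape of the idea; in practice a library such as statsmodels' Probit would do the fitting for you:

```python
# Toy probit fit on synthetic data via gradient ascent on the (concave)
# log-likelihood. True generating model: b0 = -1, b1 = 2.
import math
import random

def phi(z):   # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

random.seed(0)
data = []
for _ in range(1000):
    x = random.random()                          # fake explanatory variable
    y = 1 if random.random() < Phi(-1 + 2 * x) else 0  # fake win/loss outcome
    data.append((x, y))

b0, b1 = 0.0, 0.0
for _ in range(800):
    g0 = g1 = 0.0
    for x, y in data:
        p = Phi(b0 + b1 * x)
        w = phi(b0 + b1 * x) * (y - p) / (p * (1 - p))  # d log-lik / dz
        g0 += w
        g1 += w * x
    b0 += 0.5 * g0 / len(data)
    b1 += 0.5 * g1 / len(data)

print(f"estimated b0 = {b0:.2f}, b1 = {b1:.2f}")  # should land near -1 and 2
```

With real data you'd then test whether the coefficient on the "underdog-ness" variable is significantly different from what the odds alone would predict.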

edit: NOTHING to see here, move along
Damn I am stupid :3

On November 09 2019 03:25 travis wrote: Alright, could use some advice [statistics]
I am doing a data science project, it's a project of my choosing for my final project.
I am hypothesizing that there is official bias from some or all officials in NBA basketball, in the form of unfairly officiating certain games so that the underdog team is more likely to win. Essentially, referee corruption. A common claim among viewers of the NBA.
I've collected data from the 2012-2018 seasons. I know which refs officiated which games, I know the quantity of fouls called and their types, and which teams they were called on. I have the pregame moneylines for betting (the odds), so I can see which team was the underdog and how big of an underdog they were. I also have the results of the games.
Now, I can visualize this data. I can find and point out discrepancies.
My question is, what is a good statistically sound method by which to best "prove" this bias? Something like a confidence %. I know this is a vague question, if more information is needed just ask and I will do my best to explain.
May I know how you got along with this? sounds interesting.

John Horton Conway, known to many for the Game of Life, but more generally a great contributor to game theory and other mathematical fields, passed away on Saturday due to COVID-19.

You should probably credit XKCD when reposting a comic from there.

How difficult would it be to create a manual ladder ranking system similar to SC2 in excel? What would the logic look like behind each win/loss?

Depends on your goals for that system. Just having "a system" isn't hard: the winner gets one point, the loser loses one. This obviously has a lot of flaws.
So figure out exactly what you want, once you can formulate that clearly, you should be able to figure out how to design a system based on that.
Or read up on other rating systems and try to understand why they are designed the way they are, probably starting with the chess Elo rating.
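As a concrete starting point, the core of Elo is just two formulas, which translate directly into two spreadsheet columns (expected score, then rating update). A minimal sketch of the update rule:

```python
# Minimal Elo update: the usual starting point before looking at more
# elaborate ladder systems (Glicko, SC2-style MMR, etc.).
K = 32  # update step size; larger K means ratings move faster

def expected_score(r_a, r_b):
    """Expected score (win probability, roughly) of player A against B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won):
    """Return both players' new ratings after one game."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + K * (s_a - e_a), r_b + K * ((1 - s_a) - (1 - e_a))

# Two equal players: the winner gains 16 points, the loser drops 16.
print(update(1500, 1500, True))  # -> (1516.0, 1484.0)
```

In Excel the same thing is `=1/(1+10^((Rb-Ra)/400))` for the expected score and `=Ra+K*(S-E)` for the update, so a manual ladder is very doable there.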




