Post by liamgriff74 on Nov 21, 2018 7:56:00 GMT -5
I’ve been banned from the HB forums for criticising this. The criticism of HB is absolutely justified, but it seems they don’t want anyone to see it. HB have destroyed this game for Xbox users; they are a complete joke. But they’ve had their money now, so they don’t care. I hope Xbox users boycott the next title en masse so HB get what they deserve.
Post by titan30003 on Nov 21, 2018 8:02:19 GMT -5
Epiphanic, looking at your analysis, I have a couple of questions. 1) Did you mistype your E(x) for Web.com? You have E(x) = 1 for XB1, but then the numbers don't add up to your final numbers for all pro tours combined. I suspect E(x) should be 14 for Xbox on Web.com.
2) Here's my major question: you're assuming the same fundamental skill level within each tour. The problem, as I see it, is that IF the Xbox really plays harder, then Q School did not divide people up according to fundamental skill level, but by corrected skill level (where you take platform effects into account). So, in the Web.com tour, the Xbox players would actually have much greater fundamental skill than the PS4 players, but equal corrected skill. And since this is how the tours were split up, you wouldn't expect to see any meaningful differences within each tour in players' likelihood of making the cut.

The exception would be the PGA tour (which I need to think about some more), which contains only the highest skill level (and there is a limit to how high a person's fundamental skill can be), so in that league it makes more sense to assume fundamental skill is roughly equal.

So what you would expect to see is exactly what you're seeing. Within Web and European, XB1 is making the cut about as often as expected, but only because these very skilled players, who probably should be in a higher league, are about equal in corrected skill (by the design of Q School). And in the PGA tour, you would expect XB1 players to disproportionately miss the cut. That would be my prediction for future weeks: XB1 players will rarely make the cut in the PGA, but will do about as expected in Web and Euro. Eventually the PGA will have even fewer XB1 players due to demotions, only those with the very highest fundamental skill will stay, and you'll reach a steady state there where E(x) is close to reality.
So, the only way to overcome this potential fundamental-skill confound within leagues is for everyone to play the same tournament and do the same analysis there, which is what Q School was. So the original Q School analysis does not have this major confounding variable of different fundamental skills (well, it could, maybe PS4 players are better for some reason, but my point is there's no reason to expect it).
Maybe you understand all of this and have taken it into account somehow that I've missed. Or maybe I'm thinking about this incorrectly.
Here's a simplified example. Let's pretend the game is How Tall Are You? We measure Group A and Group B in different locations in a qualifying school. There are thousands of people, and no reason to believe Group A should be taller than Group B. We run the qualifying school and find Group A averages 72 inches but Group B averages 68 inches, and the hypothesis is that the Group B people were standing in a 4-inch hole. But we split up into leagues anyway according to what we measured, not true height. So Web.com is filled only with people in the MEASURED 66-to-70-inch range from Q School.
Then in Tournament 1 for Web.com (our tournaments are boring; we just measure people's heights again), we find that Group B people do about as well as Group A within this league when we measure their heights again in the two locations. Well, obviously this should be the case, since BY DESIGN of Q School, Web.com is filled with people who measured in the 66-to-70-inch range (but in reality, the Group B players are 4 inches taller and should be in a higher league). This should not be taken as any evidence that there is actually no difference in how Group A and Group B are measured; it is a result of how we split them up via Q School.
Anyway, sorry for the long post, I tend to ramble.
Post by paulus on Nov 21, 2018 8:13:39 GMT -5
Deleted
Deleted Member
Posts: 0
Post by Deleted on Nov 21, 2018 9:04:31 GMT -5
1st ever post, I'm guessing it was Doyley having fun with a new account while on vacation
Epiphanic
Weekend Golfer
Posts: 77
TGCT Name: Nicholas Aakre
Tour: CC-Am/TST
Post by Epiphanic on Nov 21, 2018 10:04:10 GMT -5
Thank you, titan30003. There was a mistake in the Web.com table: the number of XB1 players who made the cut should be 14, not 1. I have fixed the number. The E[X] is correct, though. To get it, I took the number of players who made the cut overall (72) and divided by the number of participants in the event (139) to get the probability of a player making the cut (51.8%). E[X] for Web.com XB1 is 18.6 because that's the number of XB1 players on that Tour multiplied by the probability of making the cut.

I made the assumption of equivalent skill level within a tour because what we're interested in is whether there is a difference between the platforms. Since the various Tours are (presumably) sorted by skill, I'm effectively stating that the null hypothesis is "there is no difference". Of course, your contention of "corrected skill level" could be true, but the data I worked with doesn't have a way to explicitly control for that. So while my analysis of the made-cut percentage by platform suggests there isn't a meaningful difference between the platforms, the reason for the lack of difference (i.e. whether "fundamental skill" is the same or "corrected skill" is the same) can't be answered definitively.

Your suggestion of using Q School data to better test whether there is a difference between the platforms is a good one. In fact, that's what Larry's analysis at the beginning of the thread is. And he found that PS4 players tended to score lower than their PC/XB1 counterparts. Now, that could mean the PS4 platform gives some inherent advantage in scoring (due to differences in how the game runs on each platform), or it could mean the general population of virtual golfers on PS4 is slightly better than the ones on PC/XB1. While there isn't an inherent reason to expect that to be true, it still could be. All the analysis in the OP says is that there is a difference in scoring between the platforms. It makes no judgement about the reason for that difference.

You brought up great questions and points to consider, which really helps advance the discussion. I guess I'm concerned that the narrative that "XB1 is harder" is leading people to interpret the data presented to confirm that belief.
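The expected-cuts arithmetic described above is simple enough to sketch. The numbers are the Web.com figures from this thread (72 of 139 made the cut; 36 XB1 players, 14 of whom made it); the helper name is mine.

```python
def expected_cuts(platform_players: int, total_made_cut: int, field_size: int) -> float:
    """E[X] = n * p: players on a platform times the field-wide cut probability."""
    p_cut = total_made_cut / field_size
    return platform_players * p_cut

# Web.com: 72 of 139 players made the cut (p ~= 51.8%); 36 of them were on XB1.
e_x = expected_cuts(36, 72, 139)
print(f"{e_x:.1f}")  # ~18.6 expected XB1 cuts, vs. 14 observed
```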
Post by titan30003 on Nov 21, 2018 10:21:32 GMT -5
Thanks Epiphanic. I understand your concern that people may be interpreting the data to confirm their beliefs. To distill my point, though: your analysis is meant to test the hypothesis "XB1 is harder". It relies on an assumption that people are grouped according to what I call fundamental skill level. But if the hypothesis is true, then, as I argued, that assumption is ill-founded, and your analysis is actually consistent with the hypothesis. If the hypothesis is false, the assumption is reasonable, and your analysis is consistent with the hypothesis being false. But the assumption's reasonableness should be independent of what we conclude about the hypothesis. So my main point is that the analysis can't determine anything about the hypothesis.
It's sort of circular, I believe: we start with an assumption whose reasonableness depends on whether XB1 is harder, and then use it to determine whether XB1 is harder.
So I believe what you've shown is that the results (ignoring Q School results) are consistent with "XB1 is not harder". But I believe my argument above shows they're also consistent with "XB1 is harder". And I think the Q School results are extremely powerful evidence that "XB1 is harder". Interested to see further stats as they come in.
Rereading your post, I think we're on the same page overall though.
(I play on PS4, so I have no dog in this fight)
Post by Epiphanic on Nov 21, 2018 11:05:07 GMT -5
I believe I understand your point, Titan. Let me try to explain what I think it is, and you can correct me if I'm in error. A player has what you call a "fundamental skill" and a "corrected skill". The fundamental skill is what we can consider a player's true ability at the game. A player's corrected skill is a function of that true ability and whatever platform that player is using. In fact, we can only observe a player's corrected skill. What you're suggesting is that the difference between a player's fundamental skill (true ability) and corrected skill (observed ability) is meaningful.
So how could we test whether that difference is meaningful? One way is the analysis presented in the OP. However, my contention is that the analysis (great as it is) doesn't actually answer that question. All it says is that there is a difference in observed ability. That difference could be the result of some sort of platform effect on a player's fundamental skill (i.e. the skill distribution of each platform is identical). On the other hand, the difference we observe could be the result of a difference in average skill between each platform's population (i.e. PS4 master race). It could also be a combination of those two factors, or some other reason we're not considering!
If we had a population of players that played on all three platforms, we could compare each player's performance on each platform. That would allow us to more confidently assume that the skill distribution of the sample population is the same across platforms. That is a reasonable assumption on its own, but we don't know that it is true. In my opinion, all the Q School results can say is that the scores are different, not why they are different.
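The "same players on multiple platforms" design described above would reduce to a paired comparison: within-player differences cancel each player's fundamental skill, leaving only a platform effect plus noise. A sketch with entirely made-up scores (every number here is hypothetical):

```python
from math import sqrt

# Hypothetical: four players, each posting a round on both platforms
# (scores relative to par, so lower is better).
ps4_scores = [-8, -6, -7, -5]
xb1_scores = [-6, -5, -4, -4]

# Within-player differences cancel out each player's fundamental skill.
diffs = [x - p for x, p in zip(xb1_scores, ps4_scores)]
n = len(diffs)
mean_diff = sum(diffs) / n
var = sum((d - mean_diff) ** 2 for d in diffs) / (n - 1)
t_stat = mean_diff / (sqrt(var) / sqrt(n))  # paired t statistic

print(f"mean XB1 - PS4 difference: {mean_diff:+.2f} strokes, t = {t_stat:.2f}")
```

With real data you would compare the t statistic against a t distribution with n - 1 degrees of freedom; the point of the design is that no assumption about equal skill populations is needed.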
Post by Deleted on Nov 21, 2018 11:24:22 GMT -5
This conversation has clearly gone above my pay grade!
Post by titan30003 on Nov 21, 2018 12:37:23 GMT -5
Yes, I think you understand my point completely. And I agree, there is a difference between platforms, and we can't be sure what the reason is. To me, my intuition says it's far more reasonable that it's a platform effect than anything else, but so far all that's been shown is that there is a difference in scores. And yeah, your suggestion would be the gold standard of tests, to get people who play on all three platforms (or even two platforms would be valuable).
Another big point I'm making is that it will be very hard to learn anything by looking at results by tour, since these tours were created by comparing "corrected skill". Everyone in a given tour has similar past corrected-skill results (from Q School) and so would likely have similar future corrected-skill results, so you won't see XB1 performing worse within a given tour (except probably the PGA, which again, I haven't thought too hard about). So observing things intra-tour won't say much at all about fundamental vs. corrected skill, or about whether there is a difference.
One thing I just thought of that I'm not going to do: What if we look at world golf ranking from before the new season of TGC Tours, and use that as a proxy for fundamental skill. Then compare where all of these people ended up. Looking at things as just an eye test, it does seem like there are disproportionately more XB1 players in the top 100 in WGR who dropped to tours where you wouldn't expect them, but I haven't looked in depth at all. Probably the analysis wouldn't be too convincing either way, but it would still be interesting to see. (maybe it's been done, I haven't read this whole thread)
Post by titaneddie on Nov 21, 2018 13:00:42 GMT -5
Hate to break it to you guys, but I think you missed the swing tempo stats between platforms... I'll be interested to see the analysis of why that doesn't affect scoring.
Post by titan30003 on Nov 21, 2018 13:05:08 GMT -5
I think I saw those. If I remember correctly, then I agree that's further evidence that XB1 is harder. But again, it could be said XB1 players are just less skilled for some reason (which I don't believe).
Post by Deleted on Nov 21, 2018 13:09:00 GMT -5
"Hate to break it to you guys but i think you missed the swing tempo stats between platforms....I'll be interested to see the analysis why that doesn't effect scoring."

With Master clubs, yes. This is why I kept it to Pro tours only. I think the green and red below support the tempo difference. Keep in mind that scoring also takes into account things not affected by tempo directly: your pre-shot prep (wind, elevation, rollout calc.), putting, knowing the yardages on your clubs, your ability not to rage/speed golf when frustrated, etc. So the tempo problem alone can be masked by some players better than others. It would be interesting to quantitatively set a % for how much tempo affects scoring.

Anyway, the green/red shows cut % made and missed, and the bolded numbers show how XB1 players make up a larger share of the field the lower the tour:
PGA (132 players; 73 made cut, 59 missed - 55.30% made, 44.70% missed)
  PS4 = 109 (82.58%): 63 made (57.80%), 46 missed (42.20%)
  XB1 =   8 ( 6.06%):  1 made (12.50%),  7 missed (87.50%)
  PC  =  15 (11.36%):  9 made (60.00%),  6 missed (40.00%)

Euro (152 players; 78 made cut, 74 missed - 51.32% made, 48.68% missed)
  PS4 = 102 (67.11%): 52 made (50.98%), 50 missed (49.02%)
  XB1 =  27 (17.76%): 12 made (44.44%), 15 missed (55.56%)
  PC  =  23 (15.13%): 14 made (60.87%),  9 missed (39.13%)

Web (139 players; 72 made cut, 67 missed - 51.80% made, 48.20% missed)
  PS4 =  90 (64.75%): 52 made (57.78%), 38 missed (42.22%)
  XB1 =  36 (25.90%): 14 made (38.89%), 22 missed (61.11%)
  PC  =  13 ( 9.35%):  6 made (46.15%),  7 missed (53.85%)

PRO TOURS (423 players; 223 made cut, 200 missed - 52.72% made, 47.28% missed)
  PS4 = 301 (71.16%): 167 made (55.48%), 134 missed (44.52%)
  XB1 =  71 (16.78%):  27 made (38.03%),  44 missed (61.97%)
  PC  =  51 (12.06%):  29 made (56.86%),  22 missed (43.14%)
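Taking the combined pro-tour counts above at face value, a chi-square test of independence (platform vs. made/missed cut) can be worked by hand. This is a sketch, not part of the original posts, and it deliberately ignores the sorting-by-corrected-skill caveat raised earlier in the thread.

```python
# Combined pro-tour counts from the table above: (made, missed) per platform.
observed = {"PS4": (167, 134), "XB1": (27, 44), "PC": (29, 22)}

total = sum(m + x for m, x in observed.values())      # 423 players
made_total = sum(m for m, _ in observed.values())     # 223 made the cut
p_made = made_total / total

# Chi-square statistic: sum of (observed - expected)^2 / expected over all cells,
# where expected counts assume cut rate is independent of platform.
chi2 = 0.0
for made, missed in observed.values():
    n = made + missed
    exp_made, exp_miss = n * p_made, n * (1 - p_made)
    chi2 += (made - exp_made) ** 2 / exp_made + (missed - exp_miss) ** 2 / exp_miss

# With (3-1)*(2-1) = 2 degrees of freedom, the 5% critical value is about 5.99.
print(f"chi-square = {chi2:.2f}")
```

The statistic comes out above the 5% critical value, i.e. the cut-rate differences between platforms in these combined counts are unlikely under independence. As discussed above, though, that says there is a difference, not why.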
Post by moneyman273 on Nov 21, 2018 14:02:49 GMT -5
It’s no good asking HB if they have any data that confirms it one way or the other because it will open a can of worms that they don’t want to deal with. All they do is lock the threads without responding, unfortunately.
Post by Epiphanic on Nov 21, 2018 14:33:07 GMT -5
Comparing the WGR pre-Season 5 to the WGR now (or once the rankings stabilize) adds a confounding variable, because TGC2 and TGC2019 are different games. Comparing them assumes that the best players in TGC2 will be the best players in TGC2019. Again, that's a reasonable assumption (I'd guess there is a positive correlation), but how strong is the correlation, and how similar is it across the platforms (assuming people don't switch platforms from one game to the next)?
That's not to say that is a bad idea. I think it's interesting and could help provide additional information.
Post by coggin66 on Nov 21, 2018 15:58:39 GMT -5
What I'd like to see is whether there is a correlation between P-P tempo % and position. From my own XB1 experience (post last patch), I know that my scores depend largely on how many erratic tempo results I get and far less on how well I play. The other main factor is how well I putt, but there can be some cross-influence here: the more random tempo instances I get, the less I care about the game and the worse I putt!