What is a "contest"? Why are these contests popular among academics?
Yes, simpler is better, so I prefer an open contest to a curated contest, but I'd expect anything that honestly deserves the name "contest" to be most of the way there. I think the most important details are keeping track records and actually being adversarial.
It's "obvious to anyone who knows math" that math isn't totally
ordered. The Indians may have had one result the Greeks did not without
having much that the Greeks did. Some Greek results were only
rediscovered in the 19th century.
Surely it is less impressive that Madhava had the series without calculus than that the Greeks had calculus.
Exactly, you're ignorant of Hellenistic science and engineering. You mention the Sand Reckoner, yet you keep saying that the Greeks didn't have positional notation. But that's the least of it.
Your Indian example is 1500 years after, so it matches what I said.
You are not impressed by Hellenistic science because you don't know anything about it.
Maybe classical Greece wasn't superhuman, but it took 1500 years or more for the rest of the world to match 200 years of Hellenistic math and science, even with Hellenistic texts to read.
They definitely had base 10k positional notation. I'm pretty sure that they had base 100. I'm less sure about base 10. And, of course, the Babylonians had base 60.
I have no idea what subtle distinction you're making, but you're wrong. The Greeks had it all.
The Greeks used decimal arithmetic.
You spent half of your post on this, but I did not get the point until I saw this comment. It would be useful to add this comment as an addendum.
The fraud podcast was terrible. This is not someone trying to understand the world.
He equivocated on three axes: (1) elections always being fraudulent vs this election being particularly fraudulent; (2) mail-in vs in-person; (3) retail vs wholesale. It's good to consider multiple hypotheses, but he equivocated so that he could combine arguments from contradictory hypotheses.
An exit pollster doesn't know who you are, but the conversation can be overheard by your neighbors, whereas the danger from a phone pollster is only from the pollster himself, really only from the possibility of being targeted by someone impersonating a pollster.
1 Related to Kaufmann's claim, the Wired story says:
people lie not only before Election Day about whether they intend to vote, but also afterwards, about whether they actually did. (According to Schaffner, college-educated people are particularly bad offenders.)
2 When I cite the exit polls, I'm using NYT presentation of Edison. I think I got the link from Kaufmann, so he's probably using it, too, although he may have more direct access. I see other sources using unattributed exit polls, but I think they're all the same. It claims that it will eventually be tweaked to match the results, but seems to claim that it was not originally. Here is an archive of the first posting, from 2 in the morning after the election, which would seem to claim not to be tweaked, but doesn't look that different.
It's not even clear to me that exit polls were accurate. They are consistent with the results, but they could contain canceling errors. In particular, they are massaged after seeing the results, so the fact that their totals are correct isn't much evidence.
Exit polls of mail-in voters are actually done by phone. So exit pollsters have to understand the biases of phone versus in-person, which is itself correlated with vote.
A simple theory is that people lie about their plans, but don't lie about their past vote. But I don't believe it.
Kaufmann says that shy Trump voters lie to phone pollsters, but don't lie at the exit polls. I don't think that is plausible.
More plausible is that they don't pick up the phone, but they do talk to exit pollsters, although that's odd, too.
The votes haven't been counted yet, but this looks to me like a 4 point miss, compared to a 1 point miss in 2016. That's worse than they promise, but it happens on a regular basis. Eyeballing Trafalgar's numbers, it looks like they were less accurate than mainstream pollsters in 2016, but more accurate than mainstream pollsters in 2020. They bragged about how they got the sign right in 2016, but that's irrelevant to predicting future success. And this year, they got the sign wrong, so few people noticed that they were the most accurate.
"Shy voters" is an ambiguous phrase. I don't think that anyone is lying about who they're going to vote for. But Trump voters do seem to be systematically unlikely to pick up the phone. In theory you asking people about their neighbors addresses that. This is related to the question of sampling corrections. I don't mean turnout, but the general problem of correcting for people who don't answer the phone, and finding a proxy for that to correct for. Again, this problem has been around forever. It's getting worse, but pollsters are getting better at dealing with it and I wouldn't predict which way it will go next time.
Sure, but how the most elite people train and compete seems like small potatoes, because there are so few elites. If colleges mimic elite colleges, that is a much bigger problem. Obviously they do, but I'm not sure how fast it attenuates as one travels down the status cline.
Also, that the consultants come out of their jobs with prestige seems to me worse than that they go into them with prestige. You can say that prestige should be about what people accomplish, but people claim that it already is about accomplishments; people claim that working at or just getting a consulting job is an accomplishment. It is this laundering of prestige that seems the big problem.
In most science fields you can't blind because people have research programs, not individual papers. Just read the bibliography and the most cited person is the author of the paper or the advisor. A lot of this is maximizing publication count by disassembling papers into "minimal publishable units," but some of it is the reasonable consequence of research programs that existed a century ago.
This is much less true in the humanities, one way in which academic humanities is healthier than academic science. The unique way in which it is healthier?
In the paragraph you quote, Robin links to an older post in which he elaborates on this.
I feel like there's a mixed message here. If consulting firms just sell prestige, isn't it good that the prestige is fabricated? Sending productive people into these industries would be wasteful.
The book is about entry-level jobs, whereas the comment about academia is rather different. Academics have long careers and the opportunity to accumulate a long track record of real results.
Why did you write the nickname court post first, rather than this post? They're practically the same post, but the nickname case is so idiosyncratic. There is a big difference between them, which is that the nickname court is potentially applicable to very small groups, whereas the whole point of the cancel court is to address a large audience, but I don't see you writing much addressing that difference.
If you can't nail down what happened with popular vote, you aren't ready to look at state votes.
children getting worse schooling, reduced socializing
Children are getting better socializing and thus probably better schooling.
Everything you say is false.
I was wrong though. 538 wasn't special in calling the vote within 1 point. Everyone did.
Here is Sam Wang's final pre-election post
The state poll-based Meta-Margin is Clinton +2.6%.
National polls give a median of Clinton +3.0 +/- 0.9% (10 polls with a start date of November 1st or later)
The day after the election, he was shocked that Clinton had only won by 1.2 points. That's the 2 point error I mentioned. I don't know whether he prefers the first prediction or the second prediction. But if I can't tell what his prediction is to 0.4 points, he has no business claiming a 0.9 point margin of error.
Anyhow, Clinton did not win by only 1.2 points. He only said that because he doesn't know jack shit about counting votes. Ultimately, she won by 2.1 points, which does fall within his 0.9 point margin of error.
The problem with Wang is that he assumed independence of polling errors. Shy voters is one phenomenon that could cause dependence, but there are many others.
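The difference independence makes can be sketched with a toy simulation (all numbers here are made up for illustration: each "poll" has a 3-point sampling error, and the correlated case adds a shared 2-point bias that hits every poll alike):

```python
import numpy as np

rng = np.random.default_rng(0)
n_polls, n_sims = 10, 100_000

# Independent errors: averaging 10 polls shrinks the spread by ~1/sqrt(10).
indep = rng.normal(0, 3, (n_sims, n_polls)).mean(axis=1)

# Correlated errors: a common 2-point bias (e.g. a systematic sampling
# problem) is shared by every poll, so averaging cannot shrink it.
shared = rng.normal(0, 2, (n_sims, 1))
corr = (rng.normal(0, 3, (n_sims, n_polls)) + shared).mean(axis=1)

print(round(indep.std(), 2))  # ~0.95, i.e. 3/sqrt(10)
print(round(corr.std(), 2))   # ~2.21, i.e. sqrt(9/10 + 4)
```

Under independence the aggregate looks far more precise than any single poll; a shared bias puts a floor under the error no matter how many polls you average.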
Sure, if there had been a shy Trump voter effect in 2016, that would be a reason to expect one in 2020. But there wasn't.
He says that pollsters should be judged on their records. I agree! You should look at what actually happened, rather than listen to advertising lies. If you don't bother to look at reality, you incentivize him to lie.
He says that the effect varied from 3 to 9 points (7 points?). Yes, it really was about 8 points in ND, but that's because no one bothered to poll there. It was also -2 points in HI, for the same reason. In the close, heavily polled states, the error was about 2 points. If he were to tell the truth about 2016, I might listen to his claims about 2020.
He does mention one way of directly measuring the effect: asking people how their neighbors would vote. But he says that everyone copied that, so the existing polls shouldn't have a shy voter effect! He claims that he has other ways of measuring it, but demands that I treat him as a black box. No, I don't trust liars.
Why do you think that there is a shy Trump voter effect? There wasn't in 2016.
Why do you think modeling is a problem? This is a slowly growing problem and it's been fine.
I believe naive polling averages were off by 2 points in 2016, exactly what you'd expect. 538 was off by less than 1 point.
What good would a game do? If people don't want to tell you how the world works, why would they put this information in a game?
It seems to me that a game is useful to assimilate factual knowledge, that is, to bridge the alief/belief gap. For example, a game would be useful to teach the value of exponential growth in networking, and thus the importance of compounding contacts of contacts. But it's no better than the model of the world that goes into it. It can't tell you the details of how to go about networking if you don't already know.
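The compounding point can be shown with a toy calculation (the rates are hypothetical, chosen only to contrast linear and compounding growth):

```python
# Hypothetical model: "direct" networking adds a fixed 10 contacts per month;
# "compounding" networking grows 50% per month as contacts introduce contacts.
direct = [10 * m for m in range(1, 13)]
compounding = [int(10 * 1.5 ** m) for m in range(1, 13)]

print(direct[-1])       # 120 after a year
print(compounding[-1])  # ~1300 after a year
```

The model is trivial, which is the point: the game can make the alief vivid, but everything interesting was in the assumed growth rate, not the game.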
There are a lot more small firms than large firms. It may be that the most innovative firms are small firms, but that is very different from saying that small firms are generally innovative. You don't think that large firms should innovate by buying random small firms, do you?
Also, if a small firm is more specialized, a single innovation could be very visible.
Lots of technologies were lost in the medieval period:
horse-operated mills
We tried that. That's how we got here. Such a system creates the current system. An unguarded stream of money attracts a bureaucracy.
OK, I was confused and phrased this badly. I should have distinguished between genic SNPs and the exome. Yes, you can't just run LASSO on the exome, both because of lack of samples and because of the large size of the exome.
What you could do (and could have done in the original paper) is take your SNPs, declare some of them to be genic and some not, restrict to the genic ones and run LASSO. This is strictly more informative than running LASSO and then restricting to the exomic SNPs. The latter is what you do at the beginning of the paper. At the end of the paper, you try to use the whole exome for imputation, which is slightly better. But it's better in an orthogonal direction, so it's no excuse for not doing the first thing.
Yeah, I understood what you did and it was stupid. Shame on you.
You are making assumptions of independence and additivity that are probably true, but you could easily do a much better test by just running LASSO again.
The state of these SNPs cannot be determined from exome-sequencing data. This suggests that exome data alone will miss much of the heritability for these traits—i.e., existing PRS cannot be computed from exome data alone.
That's a pretty weak suggestion. It would be much better to rerun the analysis on just the exome data. This is just LASSO, right? easy to do.
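The restrict-then-fit order is straightforward to sketch. A minimal example on synthetic data (the genotype matrix, effect sizes, genic annotation, and the `alpha` penalty are all made up for illustration), using scikit-learn's Lasso:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n_samples, n_snps = 500, 2000
X = rng.binomial(2, 0.3, (n_samples, n_snps)).astype(float)  # genotypes 0/1/2
beta = np.zeros(n_snps)
beta[:20] = rng.normal(0, 1, 20)                 # 20 causal SNPs
y = X @ beta + rng.normal(0, 1, n_samples)

genic = np.arange(n_snps) < 1000  # hypothetical genic/non-genic annotation

# (a) fit on all SNPs, then zero out the non-genic coefficients
fit_all = Lasso(alpha=0.1).fit(X, y)
kept = np.where(genic, fit_all.coef_, 0.0)

# (b) restrict to genic SNPs first, then refit; the penalty is now spent
# only on candidates that survive the restriction
fit_genic = Lasso(alpha=0.1).fit(X[:, genic], y)
```

Refitting as in (b) lets the remaining SNPs absorb signal that (a) simply discards, which is the sense in which it is strictly more informative.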
Not really. The hard part of disagreement is just understanding what people mean. The vast majority of the time, I can't tell if there is a disagreement at all. (Really, I don't think most utterances denote anything.)
I believe my opinions because I believe my arguments. It's really important to distinguish people who disagree with me because they have fewer arguments from people who disagree because they have more. It's not generally useful to project them onto a 1-dimensional metric that claims that they agree with each other.
I think giving an argument and then doing a poll of whether people believe the argument (maybe true/unsound/invalid/other) would be more valuable than doing the poll just about the conclusion. I do think that the polls in your more recent futurism post were more useful, but I find it pretty hard to articulate why. Maybe because this one should be dominated by a single argument. If you can identify an argument, that should trump the poll.
Why do a poll on this question?
It makes sense to poll for moral intuition, but for factual questions don't you want arguments, not popularity? Maybe some factual questions in the present day are thick with detail, but not questions like this.
I don't know what the CST was supposed to be, but what it is is exactly the ordinary situation RH mentions as already existing: writing general curriculum is a reward for prestigious specialists. Chicago has a larger and more unified general requirement than most schools, but I don't think that the CST makes it very different from, say, Columbia's.
I don't think that the professors in CST talk to each other about their research. The Committee grants PhDs, but I think that they are just as specialized as their advisors.
Maybe this general curriculum is the right way to teach undergrads, but that decision precedes the CST. Maybe U Cambridge is right to train undergrads in "Natural Science," but this impedes their ability to pursue prestigious graduate study!
What problem does the department of generalists solve?
You mention the problem of fights over resources. Does the department of generalists have its own resources? Would the traditional specialist departments encourage their members to join the generalist department because it would free up resources?
Since the professor already has tenure, why not talk about resources directly, ie, grants for generalists? Of course, the grants need evaluation, so if this post is about evaluation, maybe it is just as applicable to grants as to departments.
He defended high drug prices in America on the grounds that drugs go off patent and become cheap. But that's only true outside America!
First there is the issue of small molecule drugs vs biologics. For years people said that biologics were too hard for generics companies. There are three brands of fast insulin, not legally interchangeable. Their prices have gone up every year since they were introduced, with no effect of patent expiration. Outside America, there are generic brands, but inside America they are illegal. Monoclonal antibodies are even harder. A few generic manufacturers have gotten their drugs approved, but not as interchangeable. Finally, I think generic rituximab was approved a few months ago.
And even small molecule drugs have sporadic problems. We used to have cheap generic small molecule drugs, but that system is breaking down. I don't think generic modafinil brought the price down much. And then there have been lots of decades old drugs having shortages.
Philosophers are good at seeing the big picture? Really? Why would anyone think that? Especially someone who agrees that they specialize in microarguments. To have both statements in the same conversation suggests a failure to address the big picture.
Not that I necessarily disagree with the practical suggestion that philosophers go out and change organizations. Better than what academics are doing. Probably outsiders are better at seeing the big picture.
But is there anything special about philosophers, especially philosophy training? Philosophers are smart, but that's just selection. Philosophy training is probably more useful than other humanities training, but that's not saying much. If you want to know how organizations actually work, maybe historians' techniques for skepticism and primary sources would be better. Or anthropology, if it still existed.
Your memory of the problem Robin Hanson encountered really minimized it, though inviting a liar to rewrite your memory compounds the problem. Of course it's hard to get clients to adopt new techniques. The big problem is that once they use them, they discover that they don't actually want accurate information and they cancel the markets. Did Google continue its prediction market for more than the 10 quarters 2005Q2-2007Q3?
You've had a lot of liars on your podcast. I don't know how much you know about these people before you have them on, but interviewing a salesman is an easily avoided error. Just say no. You two should have just had a book club where you talk about Tetlock. There is no reason to believe anything he said that is not in the book, but it is hard not to believe everything you hear and it is better not to be exposed to such sources.
Why do you list "art, entertainment, news" as not having scale? Distribution of information has the biggest returns of scale of anything, so a government might want to get in the business of distributing information and/or pick winners for the production of art, entertainment, and (non-local) news.
Except for the first item, all of these are the government accomplishing what the people want. Governments want taxes and patronage as well. I think that James C. Scott's idea of legibility fits in here somewhere, but maybe downstream of other abstractions.
Hollywood has moved in the direction of a larger share, but this is obscured by the talent having both an "agent" and a "manager." The agent is more of a negotiator and the manager is more of a mentor, but they both do networking.
None of this requires the government. Whoever wants to quarantine newly infected people could also quarantine people who are not newly infected. The only difference is that there is a larger supply of uninfected people than known-infected people, from which to draw volunteers.
Where do you get your definition of variolation?
Almost every source I looked at, including your wikipedia link, claim that the main value of variolation is in selecting mild strains of smallpox (ideally variola minor), not in controlling dose. Insufflation and especially inserting the virus into a cut sound to me like they would produce particularly high initial doses.
This seems exactly backwards to me. Squashing is slightly more difficult than flattening, but it takes much less time. After squashing there are years of contact tracing, which is difficult, but I think easier than anything else I have heard suggested.
If the government is not competent to quarantine sick people in hotels, then it is not competent to quarantine intentionally infected people. Maybe careful variolation is a good idea, but simply infecting people is risking just speeding up the epidemic. Complicated plans increase risks. If the government is too slow to move in the first place, it is too slow to implement complicated plans, too slow and incompetent to notice if the plan isn't working and readjust.
No, I don't feel lucky. I'm not optimistic that the government can implement the simplest plan, but I'm even more pessimistic about everything else.
Yes, the great value in infecting early is targeting health care workers, not this model.
Sure, the headline is true, but quarantining and everything that reduces R0 does increase infection date variance. What actions could you be criticizing? I guess delaying patient zero just delays the epidemic. Are multiple initial patients better or worse?
Also, delaying infection date buys us time for innovation, like vaccines. Or just replacing broken kits.
What is the training that a theoretical physicist receives? How does it help? How do you know?
The null hypothesis about all of education is that it is just selecting for intelligence and a few other traits (which might be different from field to field). The more specialized the training, the less likely it is to transfer. I could believe that undergrad training is valuable, though how different is the valuable part in physics vs chemistry or math? But grad school? specifically theoretical physics?
There is a high-quality study of false identification, good enough to trump all the other evidence. Before DNA, 15% of rape convictions were the wrong man.
Why not RFID tag cars? phones?
Has there been a million-fold increase in programming and UI productivity over your lifetime? I think so. I think that the median code is written using tools a million times better.
In some sense there hasn't been much truly new since about . But there has been steady progress in shifting from slow, error-prone, low-level techniques to high-level techniques. The slowness of this progress is quite mysterious and there is still a lot of room to continue.
Everything you have heard about AT&T is a lie.
AT&T didn't set monopoly prices. The government set prices starting in 1917.
AT&T invented the transistor, but it never sold any. In 1954 the government not only confiscated the patent, so that anyone could sell transistors, but it also banned AT&T from selling transistors, or anything else outside telecom.
An alternative hypothesis for early crazy hours is paying for inputs.
You say that the early period is unproductive, but why do you judge that? Junior lawyers are judged on billable hours, which is following the incentive of the law firm. Similarly management consultants.
If universities are judged on quantity of papers, then it makes sense to judge professors that way. But I think the main driver is that the job of a professor is to bring in grant money, ie, inputs, so that the university can skim off a percentage. The grant money needs an excuse to be spent. The most lucrative professor is the one who can scale up spending. The easiest way to do this is to parallelize (eg, hire lots of grad students). Generating lots of papers is a proxy for being able to generate lots of paper ideas, which is a prerequisite for parallelizing, and thus for justifying lots of grant money.
I find it hard to explain investment banks and medicine this way, though.
What do you mean by "the purpose"? Whose purpose?
Maybe that's why some people like to hire people out of investment banks or biglaw, but why do investment banks and biglaw offer this signaling opportunity? For the first opportunity to hire the candidate? Maybe this makes sense for law, but don't investment bankers churn too much for this to make sense?
And medicine? Maybe if you want to hire the best doctors you have to filter them this way, but why are all doctors legally required to go through this hazing?
I'm not sure what you mean about academia. That academics are judged not just on quality, but also on quantity of papers?
Which is more mysterious, the behavior of the press before 2007 or the behavior after?
I'm not sure that the current attention is about Trump. Sure, that makes a difference, but I think that the new trials were enough to create attention on their own.
YouTube is one of those automated services. Every recent YouTube video has a transcript.
Did Corey say that this changed his opinion on the value of linguistics that he learned in the 80s? Did he never hear the 1985 quote "Every time I fire a linguist, the performance of our speech recognition system goes up" from Fred Jelinek?
(I'm not sure I agree with that jump, though. One of Chomsky's claims is simply that humans do have a sense of grammar. Deep learning teaches us to train end-to-end, but introspection of grammar suggests that's not what humans are doing.)
A simple explanation is that there are no capitalists and no return on capital: the government has nationalized the means of production, and the apparent returns on capital are labor income for stewarding the capital, set by the owner of the capital with an eye on incentives, but intentionally not reacting to this shock (or, indeed, adjusting against it).