4. In applications, you want to know whether a given object fulfills a certain property. In mathematics, you want to find the category with the most objects that fulfill a certain property. Think of every object (e.g., a matrix, a compact set, or a group) as a point in a space. A theorem says that a certain subset of that space has a certain property. The subsets can become more and more specific: the subset of operators, of linear operators, of self-adjoint linear operators, of compact self-adjoint linear operators. You have more theorems as you become more specific. A definition is a name for a subset of the space. A definition is good when many theorems are true about the subset it refers to and only a few still hold for small variations of the subset.
A difference between math research and learning math is that in research the definitions are not set in stone: you might need to change them, so you have to go back and forth.
3. Mathematicians and physicists are, respectively, the developers of a tool and the people who use it. Mathematicians know the tool better; they don’t make mistakes when they are doing an integral. They know everything about the integral and how it works (Lebesgue integration, forms, etc.). The physicist knows how to apply the tool and just doesn’t care about the tool itself; they know where it is useful. A mathematician cannot play the role of a physicist: they don’t know what is useful. They can develop useful tools, but even there, they need motivation from physics. A theory does not make sense if it is not based on something.
2. When you want to make sure that a calculation is correct, you either do the calculation again or check whether it is valid (e.g., by using physical intuition). Depending on the context, you switch between the two. This can be extended: how careful you are while doing the calculation relates to how easy and safe it is to check it at the end. You can be careless if you can always tell whether your answer is correct. You can see applied physicists being careless, theoretical physicists being somewhat careful, probabilists being more careful, and algebraists being very careful.
1. Being careless in physics is much more fruitful than being careless in math. One of the things a physicist does is continuously compare the solution to reality. Not only compare, but be guided by reality: what do I need to show or assume to arrive at this result?
0. A general principle in physics. Suppose we want to approximate f(x, y) = A(x) + B(x) + C(y) + D(y) for x and y small, where A and C are of linear order and B and D are quadratic. Naively, the only options I see are f(x, y) = A(x) + B(x) + C(y) + D(y) or f(x, y) = A(x) + C(y). However, if C(y) = 0, it is physically valid to write f(x, y) = A(x) + D(y). The justification is that we can discard B(x) because A(x) is larger, but we cannot discard D(y), since C(y) = 0 makes D(y) the leading behavior in y (see the worked instance below). A related concept: in physics, you sometimes justify an approximation by saying it is the lowest-order approximation that gives nontrivial results. In turn, this expresses the idea that if two theories explain the data equally well, we should choose the simpler one.
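To make this concrete, here is a toy instance, with terms chosen purely for illustration: take A(x) = x, B(x) = x², C(y) = 0, D(y) = y². Then

```latex
f(x, y) \;=\; \underbrace{x}_{A(x)} \;+\; \underbrace{x^{2}}_{B(x)} \;+\; \underbrace{0}_{C(y)} \;+\; \underbrace{y^{2}}_{D(y)}
\;\approx\; x + y^{2} \qquad \text{as } x, y \to 0.
```

Dropping x² costs nothing because x dominates it, but dropping y² would erase the dependence on y entirely: it is the lowest-order term in y that gives a nontrivial result.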
-1. In math and physics, symmetries are very important. A symmetry tells us that we can reduce the complexity of a system, since two objects that are different in general are the same in this case. Also, when we study objects, we look for the best ways of throwing away information. (Humans do this all the time: we do not process all the sensory information we receive.) This makes sense: after all, doing math is about answering questions, and answering a question is about reducing a big space into smaller spaces until you have reduced it to a single object, which is the answer.
-2. In theoretical fields, once you understand a concept, you can abstract it and treat it as a black box; you don’t need to think through it again. Thinking in theoretical fields may be more tiring than other activities because there is no repetition (another factor is creativity). This is a by-product of the ubiquity of generality in theoretical fields: if everything were a particular case, there would be no use for abstraction. Compare this to having to prepare test tubes in a lab. In part, this is why it is more common to see undergrads in biology research than in math research.
-3. I used to find it surprising that we can determine the macroscopic behavior of a system whose microscopic behavior is too complicated to track. But this is not surprising if, for example, the macroscopic state is the number of heads in a thousand coin flips. By contrast, we have no hope of saying anything useful if the “macroscopic” state is whether each individual coin came up heads. The point is that we only care about certain macroscopic states, and they are such that many microscopic configurations give the same macroscopic state. We can do science because of this, because there is order in the universe.
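A quick simulation makes the contrast visible (a minimal sketch; the trial counts are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# 10,000 trials of flipping 1,000 fair coins each.
flips = rng.integers(0, 2, size=(10_000, 1_000))
heads = flips.sum(axis=1)

# The macroscopic state (number of heads) concentrates sharply:
# mean ~500, standard deviation ~sqrt(1000)/2 ~ 15.8.
print(heads.mean(), heads.std())

# Essentially every trial lands within +-50 of 500 (about 3 standard deviations)...
print(np.mean(np.abs(heads - 500) <= 50))

# ...while any single microscopic configuration (the exact sequence of heads
# and tails) has probability 2**-1000: hopeless to predict.
```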
-4. For doing math research, it would be useful to have a program that generates random examples to check conjectures. For instance, if A is a positive semi-definite matrix and B is strictly positive definite, is AB positive definite?
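A minimal sketch of such a checker, assuming NumPy. Since AB is generally not symmetric, “positive definite” needs a convention; here I take it to mean the quadratic form xᵀ(AB)x is positive for nonzero x, but other readings are possible:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_psd(n):
    """Random positive semi-definite matrix (rank-deficient on purpose)."""
    m = rng.standard_normal((n, n - 1))
    return m @ m.T

def random_pd(n):
    """Random strictly positive definite matrix."""
    m = rng.standard_normal((n, n))
    return m @ m.T + 0.1 * np.eye(n)

# Search for a random counterexample to: x^T (AB) x > 0 for all x != 0.
for _ in range(10_000):
    n = rng.integers(2, 6)
    A, B = random_psd(n), random_pd(n)
    x = rng.standard_normal(n)
    if x @ A @ B @ x <= 0:
        print("Counterexample candidate found:")
        print("A =", A, "B =", B, "x =", x, sep="\n")
        break
else:
    print("No counterexample in 10,000 random trials.")
```

Such a search can only refute a conjecture or build confidence in it, never prove it; but that is usually exactly what you want before investing in a proof.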
Thanks to Raffi Hotter for a conversation about 4.