It depends what we mean by "torture". For instance, say we are looking for a correlation between two quantities. Say we have 10,000 data points in each series, and that the data is generated randomly, so it should show no correlation. If we carry out a regression on all 10,000 points, we will indeed find no correlation. But what if we start looking for subsets of this data for which there is a correlation? Even though the underlying data is completely random (so we should expect no correlation), it will be possible to choose a subset of the data such that we do find a correlation.
This correlation will not "reveal" anything of importance. It will be completely spurious. It will be quite easy, for example, to choose 20 data points from the 10,000 such that the correlation is quite high. Any conclusions drawn from this correlation will be totally false. We have "tortured" the numbers into submission.
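As a rough sketch of how easy this is (assuming NumPy and SciPy are available; the variable names and the number of trials are just illustrative), the snippet below generates 10,000 pairs of independent random numbers, confirms that the correlation over the full set is essentially zero, and then keeps drawing 20-point subsets until it stumbles on one with a strikingly high correlation:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# 10,000 pairs of independent random values: no real relationship exists.
n = 10_000
x = rng.normal(size=n)
y = rng.normal(size=n)

# Correlation over the full data set is essentially zero.
r_full, _ = pearsonr(x, y)
print(f"correlation on all {n} points: {r_full:.3f}")

# "Torture" the data: draw many 20-point subsets and keep the one that
# happens to show the strongest correlation. With enough tries, a high
# value always turns up, purely by chance.
best_r, best_idx = 0.0, None
for _ in range(100_000):
    idx = rng.choice(n, size=20, replace=False)
    r, _ = pearsonr(x[idx], y[idx])
    if abs(r) > abs(best_r):
        best_r, best_idx = r, idx

print(f"best correlation found in a 20-point subset: {best_r:.3f}")
```

The full-sample correlation comes out near zero, while the cherry-picked 20-point subset can show a correlation well above what any sensible significance test would flag, even though the data contains no signal at all.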
So the lesson is that we have to have a reason for choosing a small sample size. There may be legitimate reasons for choosing only a subset of the data ... but these reasons must be spelled out and the result must be reproducible. Most importantly, the reason cannot be "if I choose this subset, I get a better correlation".