Parfit presents an apparently plausible view: when benefits are taken to be gained pleasures or avoided pains, "to be benefits at all, they must be perceptible." A pleasure must be noticeable or it is no pleasure at all. In practice, this belief takes the form of a "threshold" - a point on the scale of magnitude below which all effects are ignored. Parfit quotes Glover:
"Suppose a village contains 100 unarmed tribesmen eating their lunch. 100 hungry armed bandits descend on the village and each bandit at gun-point takes one tribesman's lunch and eats it. The bandits then go off, each one having done a discriminable amount of harm to a single tribesman. Next week, the bandits are tempted to do the same again, but are troubled by new-found doubts about the morality of such a raid. Their doubts are put to rest by one of their number who does not believe in the principle of divisibility. They then raid the village, tie up the tribesmen, and look at their lunch. As expected, each bowl of food contains 100 baked beans. The pleasure derived from one baked bean is below the discrimination threshold. Instead of each bandit eating a single plateful as last week, each takes one bean from each plate. They leave after eating all the beans, pleased to have done no harm, as each has done no more than sub-threshold harm to each person."
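A quick tally of the story's own numbers shows why the bandits' reasoning fails: every individual taking is sub-threshold, yet the takings sum so that each tribesman loses his entire lunch, exactly as in the first raid.

```python
# Glover's bean raid, tallied with the figures from the story:
# 100 bandits, 100 tribesmen, 100 beans per plate.
BANDITS = 100
BEANS_PER_PLATE = 100

# Each bandit takes one bean from each plate - a "sub-threshold" harm.
beans_per_act = 1

# But each tribesman suffers one such act from every bandit.
loss_per_tribesman = beans_per_act * BANDITS

# The sum of imperceptible harms is the whole lunch.
assert loss_per_tribesman == BEANS_PER_PLATE
print(f"Each tribesman loses {loss_per_tribesman} of his {BEANS_PER_PLATE} beans.")
```

The arithmetic is trivial, which is the point: nothing about the total harm changes when it is divided into parcels too small to notice.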
Parfit goes on to argue that benefits and harms can be real and morally significant even when imperceptibly small. The view that a benefit is no benefit if it is too small to be noticed is often produced in defence of self-interested actions: overfishing (considering the effects on other humans only) to the point of population collapse, and pollution, can each be justified if we assume that there is some threshold below which a harm does not count. An imperceptible effect, repeated often enough or spread widely enough, can be very terrible indeed.
This view is used to support the idea (amongst others) that it is not worth voting in elections: the likelihood of making a difference is too small. But consider: if a difference is made, it is likely to be a very valuable one, for the election of a government can affect every member of the population for some years. And if one-in-a-million chances can reasonably be ignored, what would we say to the builder of a nuclear power station who uses 1000 components, each with a one-in-a-million chance of catastrophic failure per day? When the potential consequences are very large (perhaps because of their great extent), or there are many chances for them to occur, a consideration can be significant despite its seemingly impossible odds.
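The power-station arithmetic can be checked directly. A minimal sketch, assuming the figures in the text (1000 components, each with an independent one-in-a-million chance of catastrophic failure per day) and a hypothetical 30-year operating life:

```python
# Figures from the text; independence of failures is an assumption.
components = 1000
p_daily = 1e-6  # one-in-a-million chance per component per day

# Probability that at least one component fails on any given day:
# the complement of every component surviving the day.
p_any_today = 1 - (1 - p_daily) ** components   # roughly one in a thousand

# Probability of at least one failure over a hypothetical 30-year life.
days = 30 * 365
p_over_life = 1 - (1 - p_any_today) ** days

print(f"chance per day:      {p_any_today:.6f}")
print(f"chance over 30 years: {p_over_life:.5f}")
```

The daily risk is about one in a thousand, and compounded over decades a catastrophic failure becomes a near certainty: the "ignorable" one-in-a-million chance, multiplied across components and days, dominates the decision.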
A further mistake is to assume that an act cannot be wrong if, given what others do, it makes no difference to the outcome. This assumption fails in cases of overdetermination and coordination problems. "X and Y simultaneously shoot and kill me. Either shot, by itself, would have killed. Neither X nor Y acts in a way whose consequence is that an extra person dies. Given what the other does, it is true of each that, if he had not shot me, this would have made no difference." If we make this mistake, we reach the absurd conclusion that neither X nor Y has acted wrongly.
Parfit also describes another mistake, which he calls the "Share-of-the-Total" view. "Suppose that I could save either J's life or K's arm. I know that, if I do not save J's life, someone else certainly will; but no one else can save K's arm." On the Share-of-the-Total view, saving a life produces the greater benefit, so I should save J. But since J will be saved whether or not I act, my choice makes no difference to J; it is decisive only for K. We must choose the act that makes the outcome best, which may not coincide with the act that seems to produce the most benefit, or else K's arm will be lost unnecessarily.
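The two ways of counting can be set side by side. A small sketch with hypothetical benefit values (a life counted as 100, an arm as 10 - the exact numbers do not matter, only their ordering):

```python
# Hypothetical values for illustration only.
LIFE, ARM = 100, 10

# Share-of-the-Total: I credit myself with the full benefit my act
# "produces", ignoring that someone else would save J anyway.
share_if_i_save_j = LIFE
share_if_i_save_k = ARM

# Marginal difference: what changes because of *my* choice?
# J is saved whether or not I act, so saving J adds nothing;
# no one else can save K's arm, so saving K adds the whole benefit.
marginal_if_i_save_j = 0
marginal_if_i_save_k = ARM

assert share_if_i_save_j > share_if_i_save_k      # the misleading comparison
assert marginal_if_i_save_k > marginal_if_i_save_j  # the correct one
```

The two accountings recommend opposite acts; only the marginal comparison tracks what my choice actually makes happen, which is why following the Share-of-the-Total view costs K an arm for nothing.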