Humans can meaningfully express their confidence about uncertain events. Normatively, these beliefs should correspond to Bayesian probabilities. However, it is unclear whether the normative theory provides an accurate description of the human sense of confidence, partly because the self-report measures used in most studies hinder quantitative comparison with normative predictions. To measure confidence objectively, we developed a dual-decision task in which the correctness of a first decision determines the correct answer of a second decision, thus mimicking real-life situations in which confidence guides future choices. While participants were able to use confidence to improve performance, they fell short of the ideal Bayesian strategy. Instead, behaviour was better explained by a model with a few discrete confidence levels. These findings question the descriptive validity of normative accounts, and suggest that confidence judgments might be based on point estimates of the relevant variables, rather than on their full probability distributions.
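As a concrete illustration (not the authors' implementation), the sketch below simulates a hypothetical version of such a dual-decision task with Gaussian sensory evidence. An ideal Bayesian observer carries forward the exact posterior probability that its first choice was correct and combines it with the evidence for the second decision, whereas a discrete-confidence observer carries forward only one of a few fixed confidence levels. All generative details and parameter values (MU1, SIGMA1, MU2, SIGMA2, the three confidence levels, the binning rule) are assumptions made for this illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical generative assumptions (illustrative, not from the paper) ---
MU1, SIGMA1 = 1.0, 1.0   # first-decision signal strength and sensory noise
MU2, SIGMA2 = 1.0, 1.0   # second-decision signal strength and sensory noise
N_TRIALS = 100_000

def simulate(confidence_fn):
    """Simulate dual-decision trials and return second-decision accuracy.

    First decision: judge the sign of a noisy sample x1.
    Second decision: the second stimulus mean is +MU2 if the first decision
    was correct and -MU2 otherwise, so the observer must combine its
    confidence in decision 1 with the new evidence x2.
    """
    s1 = rng.choice([-1, 1], size=N_TRIALS)
    x1 = s1 * MU1 + rng.normal(0.0, SIGMA1, N_TRIALS)
    r1 = np.sign(x1)
    correct1 = r1 == s1

    # Second stimulus: its category is tied to the correctness of decision 1.
    s2 = np.where(correct1, 1, -1)
    x2 = s2 * MU2 + rng.normal(0.0, SIGMA2, N_TRIALS)

    # Prior that decision 1 was correct, as delivered by the confidence model.
    c = confidence_fn(np.abs(x1))

    # Bayesian combination: log posterior odds that decision 1 was correct.
    log_prior_odds = np.log(c / (1.0 - c))
    log_lik_odds = 2.0 * MU2 * x2 / SIGMA2**2   # Gaussian log-likelihood ratio
    r2 = np.where(log_prior_odds + log_lik_odds > 0, 1, -1)
    return np.mean(r2 == s2)

def bayesian_confidence(abs_x1):
    """Exact posterior probability that the first choice was correct."""
    return 1.0 / (1.0 + np.exp(-2.0 * MU1 * abs_x1 / SIGMA1**2))

def discrete_confidence(abs_x1, levels=(0.55, 0.75, 0.95)):
    """Confidence quantised to a few fixed levels (illustrative values)."""
    exact = bayesian_confidence(abs_x1)
    edges = np.quantile(exact, [1 / 3, 2 / 3])
    return np.select([exact < edges[0], exact < edges[1]],
                     levels[:2], default=levels[2])

print(f"ideal Bayesian observer  : {simulate(bayesian_confidence):.3f}")
print(f"discrete-confidence model: {simulate(discrete_confidence):.3f}")
```

Under these assumptions, the discrete-confidence observer achieves slightly lower second-decision accuracy than the ideal Bayesian observer, which is the kind of shortfall the behavioural comparison in the abstract refers to.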