This paper evaluates how adaptive learning agents weight different pieces of information when forming expectations with a recursive least squares algorithm. The analysis is based on a new and more general non-recursive representation of the learning algorithm, namely, a penalized weighted least squares estimator in which a penalty term accounts for the effects of the learning initials. The paper then draws out the behavioral implications of alternative specifications of the learning mechanism, including the cases of decreasing, constant, regime-switching, adaptive, and age-dependent gains, and offers practical recommendations on their computation. One key new finding is that, without a proper account of the uncertainty about the learning initial, a constant gain can generate a time-varying profile of weights on past observations, distorting the estimation and behavioral interpretation of this mechanism, particularly in small samples. Indeed, simulations and empirical estimation of a Phillips curve model with learning indicate that this misspecification of the initials can yield estimates in which inflation is less responsive to expectations and output gaps than it actually is, i.e., “flatter” Phillips curves.
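The role of the learning initial can be illustrated with a minimal sketch, for the simplest scalar case of constant-gain updating (the gain, sample size, and data below are illustrative assumptions, not taken from the paper). The recursion x_t = x_{t-1} + g(y_t - x_{t-1}) is algebraically equivalent to a non-recursive weighted average that places geometric weights g(1-g)^(t-j) on past observations y_j and a residual weight (1-g)^t on the initial x_0, so in short samples the initial still carries non-negligible weight:

```python
import numpy as np

def constant_gain(y, x0, g):
    """Constant-gain recursion: x_t = x_{t-1} + g * (y_t - x_{t-1})."""
    x = x0
    for obs in y:
        x = x + g * (obs - x)
    return x

def nonrecursive_weights(t, g):
    """Implied non-recursive weights: geometric on y_1..y_t, (1-g)^t on x_0."""
    w_obs = g * (1 - g) ** np.arange(t - 1, -1, -1)  # weight on y_1, ..., y_t
    w_init = (1 - g) ** t                            # residual weight on the initial
    return w_obs, w_init

rng = np.random.default_rng(0)
y = rng.normal(2.0, 1.0, size=20)   # illustrative short sample
g, x0 = 0.1, 0.0                    # illustrative gain and initial

x_rec = constant_gain(y, x0, g)
w_obs, w_init = nonrecursive_weights(len(y), g)
x_nonrec = w_obs @ y + w_init * x0  # same estimate, written as a weighted average

assert np.isclose(x_rec, x_nonrec)
print(f"weight still on the initial after {len(y)} obs: {w_init:.3f}")
```

With g = 0.1 and 20 observations, the initial retains weight (1-g)^20 ≈ 0.12, so an arbitrary choice of x_0 still shifts the estimate materially, consistent with the small-sample distortions emphasized in the paper.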